00:00:00.002 Started by upstream project "autotest-per-patch" build number 120493 00:00:00.002 originally caused by: 00:00:00.002 Started by user sys_sgci 00:00:00.098 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.099 The recommended git tool is: git 00:00:00.099 using credential 00000000-0000-0000-0000-000000000002 00:00:00.101 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.140 Fetching changes from the remote Git repository 00:00:00.142 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.176 Using shallow fetch with depth 1 00:00:00.176 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.176 > git --version # timeout=10 00:00:00.201 > git --version # 'git version 2.39.2' 00:00:00.201 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.202 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.202 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.619 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.628 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.638 Checking out Revision 27f13fcb4eea6a447c9f3d131408acb483141c09 (FETCH_HEAD) 00:00:04.638 > git config core.sparsecheckout # timeout=10 00:00:04.647 > git read-tree -mu HEAD # timeout=10 00:00:04.661 > git checkout -f 27f13fcb4eea6a447c9f3d131408acb483141c09 # timeout=5 00:00:04.677 Commit message: "docker/pdu_power: add PDU APC-C14 and APC-C18" 00:00:04.677 > git rev-list --no-walk 27f13fcb4eea6a447c9f3d131408acb483141c09 # timeout=10 00:00:04.754 [Pipeline] Start of Pipeline 00:00:04.768 [Pipeline] library 00:00:04.770 Loading library shm_lib@master 00:00:04.770 Library shm_lib@master is cached. Copying from home. 00:00:04.784 [Pipeline] node 00:00:19.786 Still waiting to schedule task 00:00:19.787 Waiting for next available executor on ‘vagrant-vm-host’ 00:02:15.563 Running on VM-host-SM16 in /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:02:15.565 [Pipeline] { 00:02:15.575 [Pipeline] catchError 00:02:15.576 [Pipeline] { 00:02:15.590 [Pipeline] wrap 00:02:15.601 [Pipeline] { 00:02:15.606 [Pipeline] stage 00:02:15.608 [Pipeline] { (Prologue) 00:02:15.630 [Pipeline] echo 00:02:15.631 Node: VM-host-SM16 00:02:15.637 [Pipeline] cleanWs 00:02:15.646 [WS-CLEANUP] Deleting project workspace... 00:02:15.647 [WS-CLEANUP] Deferred wipeout is used... 
00:02:15.653 [WS-CLEANUP] done 00:02:15.820 [Pipeline] setCustomBuildProperty 00:02:15.902 [Pipeline] nodesByLabel 00:02:15.904 Found a total of 1 nodes with the 'sorcerer' label 00:02:15.916 [Pipeline] httpRequest 00:02:15.920 HttpMethod: GET 00:02:15.921 URL: http://10.211.164.101/packages/jbp_27f13fcb4eea6a447c9f3d131408acb483141c09.tar.gz 00:02:15.924 Sending request to url: http://10.211.164.101/packages/jbp_27f13fcb4eea6a447c9f3d131408acb483141c09.tar.gz 00:02:15.927 Response Code: HTTP/1.1 200 OK 00:02:15.928 Success: Status code 200 is in the accepted range: 200,404 00:02:15.929 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/jbp_27f13fcb4eea6a447c9f3d131408acb483141c09.tar.gz 00:02:16.065 [Pipeline] sh 00:02:16.341 + tar --no-same-owner -xf jbp_27f13fcb4eea6a447c9f3d131408acb483141c09.tar.gz 00:02:16.362 [Pipeline] httpRequest 00:02:16.366 HttpMethod: GET 00:02:16.367 URL: http://10.211.164.101/packages/spdk_74bc86fe4f67fcf712651f475ba668e2492b78ec.tar.gz 00:02:16.367 Sending request to url: http://10.211.164.101/packages/spdk_74bc86fe4f67fcf712651f475ba668e2492b78ec.tar.gz 00:02:16.368 Response Code: HTTP/1.1 200 OK 00:02:16.369 Success: Status code 200 is in the accepted range: 200,404 00:02:16.369 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk_74bc86fe4f67fcf712651f475ba668e2492b78ec.tar.gz 00:02:18.527 [Pipeline] sh 00:02:18.805 + tar --no-same-owner -xf spdk_74bc86fe4f67fcf712651f475ba668e2492b78ec.tar.gz 00:02:22.100 [Pipeline] sh 00:02:22.380 + git -C spdk log --oneline -n5 00:02:22.380 74bc86fe4 raid: don't remove an unconfigured base bdev 00:02:22.381 21748f1da raid: fix race between starting rebuild and creating io channel 00:02:22.381 480afb9a1 raid: remove base_bdev_lock 00:02:22.381 b01acb55d raid: fix some issues in raid_bdev_write_config_json() 00:02:22.381 0d5f01bd8 raid: examine other bdevs when starting from superblock 00:02:22.401 [Pipeline] writeFile 00:02:22.420 [Pipeline] sh 00:02:22.701 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:02:22.713 [Pipeline] sh 00:02:22.994 + cat autorun-spdk.conf 00:02:22.994 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:22.994 SPDK_TEST_NVMF=1 00:02:22.994 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:22.994 SPDK_TEST_USDT=1 00:02:22.994 SPDK_TEST_NVMF_MDNS=1 00:02:22.994 SPDK_RUN_UBSAN=1 00:02:22.994 NET_TYPE=virt 00:02:22.994 SPDK_JSONRPC_GO_CLIENT=1 00:02:22.994 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:23.001 RUN_NIGHTLY=0 00:02:23.003 [Pipeline] } 00:02:23.022 [Pipeline] // stage 00:02:23.036 [Pipeline] stage 00:02:23.039 [Pipeline] { (Run VM) 00:02:23.055 [Pipeline] sh 00:02:23.381 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:02:23.381 + echo 'Start stage prepare_nvme.sh' 00:02:23.381 Start stage prepare_nvme.sh 00:02:23.381 + [[ -n 0 ]] 00:02:23.382 + disk_prefix=ex0 00:02:23.382 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 ]] 00:02:23.382 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf ]] 00:02:23.382 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf 00:02:23.382 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:23.382 ++ SPDK_TEST_NVMF=1 00:02:23.382 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:23.382 ++ SPDK_TEST_USDT=1 00:02:23.382 ++ SPDK_TEST_NVMF_MDNS=1 00:02:23.382 ++ SPDK_RUN_UBSAN=1 00:02:23.382 ++ NET_TYPE=virt 00:02:23.382 ++ SPDK_JSONRPC_GO_CLIENT=1 00:02:23.382 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:23.382 ++ RUN_NIGHTLY=0 00:02:23.382 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:02:23.382 
+ nvme_files=() 00:02:23.382 + declare -A nvme_files 00:02:23.382 + backend_dir=/var/lib/libvirt/images/backends 00:02:23.382 + nvme_files['nvme.img']=5G 00:02:23.382 + nvme_files['nvme-cmb.img']=5G 00:02:23.382 + nvme_files['nvme-multi0.img']=4G 00:02:23.382 + nvme_files['nvme-multi1.img']=4G 00:02:23.382 + nvme_files['nvme-multi2.img']=4G 00:02:23.382 + nvme_files['nvme-openstack.img']=8G 00:02:23.382 + nvme_files['nvme-zns.img']=5G 00:02:23.382 + (( SPDK_TEST_NVME_PMR == 1 )) 00:02:23.382 + (( SPDK_TEST_FTL == 1 )) 00:02:23.382 + (( SPDK_TEST_NVME_FDP == 1 )) 00:02:23.382 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:02:23.382 + for nvme in "${!nvme_files[@]}" 00:02:23.382 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G 00:02:23.382 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:02:23.382 + for nvme in "${!nvme_files[@]}" 00:02:23.382 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G 00:02:23.382 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:02:23.382 + for nvme in "${!nvme_files[@]}" 00:02:23.382 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G 00:02:23.382 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:02:23.382 + for nvme in "${!nvme_files[@]}" 00:02:23.382 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G 00:02:23.382 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:02:23.382 + for nvme in "${!nvme_files[@]}" 00:02:23.382 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G 00:02:23.382 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:02:23.382 + for nvme in "${!nvme_files[@]}" 00:02:23.382 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G 00:02:23.382 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:02:23.382 + for nvme in "${!nvme_files[@]}" 00:02:23.382 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G 00:02:24.318 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:02:24.318 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu 00:02:24.318 + echo 'End stage prepare_nvme.sh' 00:02:24.318 End stage prepare_nvme.sh 00:02:24.331 [Pipeline] sh 00:02:24.612 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:02:24.612 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -H -a -v -f fedora38 00:02:24.612 00:02:24.612 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/scripts/vagrant 00:02:24.612 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk 
00:02:24.612 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:02:24.612 HELP=0 00:02:24.612 DRY_RUN=0 00:02:24.612 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img, 00:02:24.612 NVME_DISKS_TYPE=nvme,nvme, 00:02:24.612 NVME_AUTO_CREATE=0 00:02:24.612 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img, 00:02:24.612 NVME_CMB=,, 00:02:24.612 NVME_PMR=,, 00:02:24.612 NVME_ZNS=,, 00:02:24.612 NVME_MS=,, 00:02:24.612 NVME_FDP=,, 00:02:24.612 SPDK_VAGRANT_DISTRO=fedora38 00:02:24.612 SPDK_VAGRANT_VMCPU=10 00:02:24.612 SPDK_VAGRANT_VMRAM=12288 00:02:24.612 SPDK_VAGRANT_PROVIDER=libvirt 00:02:24.612 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:02:24.612 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:02:24.612 SPDK_OPENSTACK_NETWORK=0 00:02:24.612 VAGRANT_PACKAGE_BOX=0 00:02:24.612 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:02:24.612 FORCE_DISTRO=true 00:02:24.612 VAGRANT_BOX_VERSION= 00:02:24.612 EXTRA_VAGRANTFILES= 00:02:24.612 NIC_MODEL=e1000 00:02:24.612 00:02:24.612 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt' 00:02:24.612 /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:02:27.895 Bringing machine 'default' up with 'libvirt' provider... 00:02:28.461 ==> default: Creating image (snapshot of base box volume). 00:02:28.720 ==> default: Creating domain with the following settings... 00:02:28.720 ==> default: -- Name: fedora38-38-1.6-1705279005-2131_default_1713370442_2c5766648cc6601134d4 00:02:28.720 ==> default: -- Domain type: kvm 00:02:28.720 ==> default: -- Cpus: 10 00:02:28.720 ==> default: -- Feature: acpi 00:02:28.720 ==> default: -- Feature: apic 00:02:28.720 ==> default: -- Feature: pae 00:02:28.720 ==> default: -- Memory: 12288M 00:02:28.720 ==> default: -- Memory Backing: hugepages: 00:02:28.720 ==> default: -- Management MAC: 00:02:28.720 ==> default: -- Loader: 00:02:28.720 ==> default: -- Nvram: 00:02:28.720 ==> default: -- Base box: spdk/fedora38 00:02:28.720 ==> default: -- Storage pool: default 00:02:28.720 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1705279005-2131_default_1713370442_2c5766648cc6601134d4.img (20G) 00:02:28.720 ==> default: -- Volume Cache: default 00:02:28.720 ==> default: -- Kernel: 00:02:28.720 ==> default: -- Initrd: 00:02:28.720 ==> default: -- Graphics Type: vnc 00:02:28.720 ==> default: -- Graphics Port: -1 00:02:28.720 ==> default: -- Graphics IP: 127.0.0.1 00:02:28.720 ==> default: -- Graphics Password: Not defined 00:02:28.720 ==> default: -- Video Type: cirrus 00:02:28.720 ==> default: -- Video VRAM: 9216 00:02:28.720 ==> default: -- Sound Type: 00:02:28.720 ==> default: -- Keymap: en-us 00:02:28.720 ==> default: -- TPM Path: 00:02:28.720 ==> default: -- INPUT: type=mouse, bus=ps2 00:02:28.720 ==> default: -- Command line args: 00:02:28.720 ==> default: -> value=-device, 00:02:28.720 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:02:28.720 ==> default: -> value=-drive, 00:02:28.720 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0, 00:02:28.720 ==> default: -> value=-device, 00:02:28.720 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 
00:02:28.720 ==> default: -> value=-device, 00:02:28.720 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:02:28.720 ==> default: -> value=-drive, 00:02:28.720 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:02:28.720 ==> default: -> value=-device, 00:02:28.720 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:28.720 ==> default: -> value=-drive, 00:02:28.720 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:02:28.720 ==> default: -> value=-device, 00:02:28.720 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:28.720 ==> default: -> value=-drive, 00:02:28.720 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:02:28.720 ==> default: -> value=-device, 00:02:28.720 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:28.979 ==> default: Creating shared folders metadata... 00:02:28.979 ==> default: Starting domain. 00:02:30.359 ==> default: Waiting for domain to get an IP address... 00:02:52.282 ==> default: Waiting for SSH to become available... 00:02:52.282 ==> default: Configuring and enabling network interfaces... 00:02:54.827 default: SSH address: 192.168.121.66:22 00:02:54.827 default: SSH username: vagrant 00:02:54.827 default: SSH auth method: private key 00:02:56.726 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:03:04.854 ==> default: Mounting SSHFS shared folder... 00:03:05.788 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:03:05.788 ==> default: Checking Mount.. 00:03:07.164 ==> default: Folder Successfully Mounted! 00:03:07.164 ==> default: Running provisioner: file... 00:03:08.099 default: ~/.gitconfig => .gitconfig 00:03:08.357 00:03:08.357 SUCCESS! 00:03:08.357 00:03:08.357 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt and type "vagrant ssh" to use. 00:03:08.357 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:03:08.357 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt" to destroy all trace of vm. 
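[Editor's note] For reference, the "-device"/"-drive" pairs logged above assemble into the NVMe portion of a qemu-system-x86_64 command line along these lines. This is a hand-reassembled sketch built only from the values in the log; libvirt emits the real command, including the machine, CPU, memory, and network options omitted here:

# Controller nvme-0 (serial 12340): one namespace backed by ex0-nvme.img.
# Controller nvme-1 (serial 12341): three namespaces backed by multi0/1/2.
qemu-system-x86_64 \
  -device nvme,id=nvme-0,serial=12340,addr=0x10 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0 \
  -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
  -device nvme,id=nvme-1,serial=12341,addr=0x11 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0 \
  -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1 \
  -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2 \
  -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096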
00:03:08.357 00:03:08.367 [Pipeline] } 00:03:08.383 [Pipeline] // stage 00:03:08.392 [Pipeline] dir 00:03:08.393 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt 00:03:08.394 [Pipeline] { 00:03:08.408 [Pipeline] catchError 00:03:08.410 [Pipeline] { 00:03:08.423 [Pipeline] sh 00:03:08.702 + vagrant ssh-config --host vagrant 00:03:08.702 + sed -ne /^Host/,$p 00:03:08.702 + tee ssh_conf 00:03:12.890 Host vagrant 00:03:12.890 HostName 192.168.121.66 00:03:12.890 User vagrant 00:03:12.890 Port 22 00:03:12.890 UserKnownHostsFile /dev/null 00:03:12.890 StrictHostKeyChecking no 00:03:12.890 PasswordAuthentication no 00:03:12.890 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1705279005-2131/libvirt/fedora38 00:03:12.890 IdentitiesOnly yes 00:03:12.890 LogLevel FATAL 00:03:12.890 ForwardAgent yes 00:03:12.890 ForwardX11 yes 00:03:12.890 00:03:12.904 [Pipeline] withEnv 00:03:12.906 [Pipeline] { 00:03:12.921 [Pipeline] sh 00:03:13.195 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:03:13.195 source /etc/os-release 00:03:13.195 [[ -e /image.version ]] && img=$(< /image.version) 00:03:13.195 # Minimal, systemd-like check. 00:03:13.195 if [[ -e /.dockerenv ]]; then 00:03:13.195 # Clear garbage from the node's name: 00:03:13.195 # agt-er_autotest_547-896 -> autotest_547-896 00:03:13.195 # $HOSTNAME is the actual container id 00:03:13.195 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:03:13.195 if mountpoint -q /etc/hostname; then 00:03:13.195 # We can assume this is a mount from a host where container is running, 00:03:13.195 # so fetch its hostname to easily identify the target swarm worker. 00:03:13.195 container="$(< /etc/hostname) ($agent)" 00:03:13.195 else 00:03:13.195 # Fallback 00:03:13.195 container=$agent 00:03:13.195 fi 00:03:13.195 fi 00:03:13.195 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:03:13.195 00:03:13.462 [Pipeline] } 00:03:13.479 [Pipeline] // withEnv 00:03:13.487 [Pipeline] setCustomBuildProperty 00:03:13.500 [Pipeline] stage 00:03:13.502 [Pipeline] { (Tests) 00:03:13.522 [Pipeline] sh 00:03:13.800 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:03:13.815 [Pipeline] timeout 00:03:13.816 Timeout set to expire in 40 min 00:03:13.818 [Pipeline] { 00:03:13.835 [Pipeline] sh 00:03:14.127 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:03:14.706 HEAD is now at 74bc86fe4 raid: don't remove an unconfigured base bdev 00:03:14.720 [Pipeline] sh 00:03:14.998 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:03:15.265 [Pipeline] sh 00:03:15.552 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:03:15.565 [Pipeline] sh 00:03:15.838 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant ./autoruner.sh spdk_repo 00:03:15.838 ++ readlink -f spdk_repo 00:03:15.838 + DIR_ROOT=/home/vagrant/spdk_repo 00:03:15.838 + [[ -n /home/vagrant/spdk_repo ]] 00:03:15.838 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:03:15.838 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:03:15.838 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:03:15.838 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:03:15.838 + [[ -d /home/vagrant/spdk_repo/output ]] 00:03:15.838 + cd /home/vagrant/spdk_repo 00:03:15.838 + source /etc/os-release 00:03:15.838 ++ NAME='Fedora Linux' 00:03:15.838 ++ VERSION='38 (Cloud Edition)' 00:03:15.838 ++ ID=fedora 00:03:15.838 ++ VERSION_ID=38 00:03:15.838 ++ VERSION_CODENAME= 00:03:15.838 ++ PLATFORM_ID=platform:f38 00:03:15.838 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:03:15.838 ++ ANSI_COLOR='0;38;2;60;110;180' 00:03:15.838 ++ LOGO=fedora-logo-icon 00:03:15.838 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:03:15.838 ++ HOME_URL=https://fedoraproject.org/ 00:03:15.838 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:03:15.838 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:03:15.838 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:03:15.838 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:03:15.838 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:03:15.838 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:03:15.838 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:03:15.838 ++ SUPPORT_END=2024-05-14 00:03:15.838 ++ VARIANT='Cloud Edition' 00:03:15.838 ++ VARIANT_ID=cloud 00:03:15.838 + uname -a 00:03:15.838 Linux fedora38-cloud-1705279005-2131 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:03:16.097 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:16.372 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:16.372 Hugepages 00:03:16.372 node hugesize free / total 00:03:16.372 node0 1048576kB 0 / 0 00:03:16.647 node0 2048kB 0 / 0 00:03:16.647 00:03:16.647 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:16.647 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:16.647 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:16.647 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:03:16.647 + rm -f /tmp/spdk-ld-path 00:03:16.647 + source autorun-spdk.conf 00:03:16.647 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:16.647 ++ SPDK_TEST_NVMF=1 00:03:16.647 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:16.647 ++ SPDK_TEST_USDT=1 00:03:16.647 ++ SPDK_TEST_NVMF_MDNS=1 00:03:16.647 ++ SPDK_RUN_UBSAN=1 00:03:16.647 ++ NET_TYPE=virt 00:03:16.647 ++ SPDK_JSONRPC_GO_CLIENT=1 00:03:16.647 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:16.647 ++ RUN_NIGHTLY=0 00:03:16.647 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:03:16.647 + [[ -n '' ]] 00:03:16.647 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:03:16.647 + for M in /var/spdk/build-*-manifest.txt 00:03:16.647 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:03:16.647 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:16.647 + for M in /var/spdk/build-*-manifest.txt 00:03:16.647 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:03:16.647 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:16.647 ++ uname 00:03:16.647 + [[ Linux == \L\i\n\u\x ]] 00:03:16.647 + sudo dmesg -T 00:03:16.647 + sudo dmesg --clear 00:03:16.647 + dmesg_pid=5277 00:03:16.647 + sudo dmesg -Tw 00:03:16.647 + [[ Fedora Linux == FreeBSD ]] 00:03:16.647 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:16.647 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:16.647 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:03:16.647 + [[ -x /usr/src/fio-static/fio ]] 00:03:16.647 + export FIO_BIN=/usr/src/fio-static/fio 00:03:16.647 + 
FIO_BIN=/usr/src/fio-static/fio 00:03:16.647 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:03:16.647 + [[ ! -v VFIO_QEMU_BIN ]] 00:03:16.647 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:03:16.647 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:16.647 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:16.647 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:03:16.647 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:16.647 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:16.647 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:16.647 Test configuration: 00:03:16.647 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:16.647 SPDK_TEST_NVMF=1 00:03:16.647 SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:16.647 SPDK_TEST_USDT=1 00:03:16.647 SPDK_TEST_NVMF_MDNS=1 00:03:16.647 SPDK_RUN_UBSAN=1 00:03:16.647 NET_TYPE=virt 00:03:16.647 SPDK_JSONRPC_GO_CLIENT=1 00:03:16.647 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:16.647 RUN_NIGHTLY=0 16:14:50 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:16.647 16:14:50 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]] 00:03:16.647 16:14:50 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:16.647 16:14:50 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:16.914 16:14:50 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:16.914 16:14:50 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:16.914 16:14:50 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:16.914 16:14:50 -- paths/export.sh@5 -- $ export PATH 00:03:16.915 16:14:50 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:16.915 16:14:50 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:03:16.915 16:14:50 -- common/autobuild_common.sh@435 -- $ date +%s 00:03:16.915 16:14:50 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713370490.XXXXXX 00:03:16.915 16:14:50 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713370490.oDcBTU 00:03:16.915 16:14:50 -- 
common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:03:16.915 16:14:50 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:03:16.915 16:14:50 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:03:16.915 16:14:50 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:03:16.915 16:14:50 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:03:16.915 16:14:50 -- common/autobuild_common.sh@451 -- $ get_config_params 00:03:16.915 16:14:50 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:03:16.915 16:14:50 -- common/autotest_common.sh@10 -- $ set +x 00:03:16.915 16:14:50 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 00:03:16.915 16:14:50 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:03:16.915 16:14:50 -- pm/common@17 -- $ local monitor 00:03:16.915 16:14:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:16.915 16:14:50 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=5312 00:03:16.915 16:14:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:16.915 16:14:50 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=5314 00:03:16.915 16:14:50 -- pm/common@26 -- $ sleep 1 00:03:16.915 16:14:50 -- pm/common@21 -- $ date +%s 00:03:16.915 16:14:50 -- pm/common@21 -- $ date +%s 00:03:16.915 16:14:50 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1713370490 00:03:16.915 16:14:50 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1713370490 00:03:16.915 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1713370490_collect-vmstat.pm.log 00:03:16.915 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1713370490_collect-cpu-load.pm.log 00:03:17.911 16:14:51 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:03:17.911 16:14:51 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:17.911 16:14:51 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:17.911 16:14:51 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:17.911 16:14:51 -- spdk/autobuild.sh@16 -- $ date -u 00:03:17.911 Wed Apr 17 04:14:51 PM UTC 2024 00:03:17.911 16:14:51 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:17.911 v24.05-pre-400-g74bc86fe4 00:03:17.911 16:14:51 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:03:17.911 16:14:51 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:17.911 16:14:51 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:17.911 16:14:51 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:03:17.911 16:14:51 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:03:17.911 16:14:51 -- common/autotest_common.sh@10 -- $ set +x 00:03:17.911 ************************************ 00:03:17.911 START TEST ubsan 00:03:17.911 ************************************ 00:03:17.911 using 
ubsan 00:03:17.911 16:14:51 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan' 00:03:17.911 00:03:17.911 real 0m0.000s 00:03:17.911 user 0m0.000s 00:03:17.911 sys 0m0.000s 00:03:17.911 16:14:51 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:03:17.911 ************************************ 00:03:17.911 END TEST ubsan 00:03:17.911 ************************************ 00:03:17.911 16:14:51 -- common/autotest_common.sh@10 -- $ set +x 00:03:17.911 16:14:51 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:03:17.911 16:14:51 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:17.911 16:14:51 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:17.911 16:14:51 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:17.911 16:14:51 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:17.911 16:14:51 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:17.911 16:14:51 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:17.911 16:14:51 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:17.911 16:14:51 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared 00:03:17.911 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:17.911 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:18.476 Using 'verbs' RDMA provider 00:03:31.267 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:43.535 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:43.535 go version go1.21.1 linux/amd64 00:03:43.837 Creating mk/config.mk...done. 00:03:43.837 Creating mk/cc.flags.mk...done. 00:03:43.837 Type 'make' to build. 00:03:43.837 16:15:17 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:03:43.837 16:15:17 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:03:43.837 16:15:17 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:03:43.837 16:15:17 -- common/autotest_common.sh@10 -- $ set +x 00:03:44.095 ************************************ 00:03:44.095 START TEST make 00:03:44.095 ************************************ 00:03:44.095 16:15:17 -- common/autotest_common.sh@1111 -- $ make -j10 00:03:44.353 make[1]: Nothing to be done for 'all'. 
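[Editor's note] The configure step above can be reproduced outside the CI harness. A minimal sketch, assuming a fresh clone from https://github.com/spdk/spdk (the CI job pulls a pre-packaged tarball instead) and that fio sources live under /usr/src/fio as they do on this CI image:

# Rebuild SPDK with the same flags autobuild.sh passed above (sketch).
git clone https://github.com/spdk/spdk && cd spdk
git submodule update --init   # pulls the bundled DPDK, among others
./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
    --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared
make -j"$(nproc)"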
00:03:59.217 The Meson build system 00:03:59.217 Version: 1.3.1 00:03:59.217 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:03:59.217 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:59.217 Build type: native build 00:03:59.217 Program cat found: YES (/usr/bin/cat) 00:03:59.217 Project name: DPDK 00:03:59.217 Project version: 23.11.0 00:03:59.217 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:59.217 C linker for the host machine: cc ld.bfd 2.39-16 00:03:59.217 Host machine cpu family: x86_64 00:03:59.217 Host machine cpu: x86_64 00:03:59.217 Message: ## Building in Developer Mode ## 00:03:59.217 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:59.217 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:03:59.217 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:59.217 Program python3 found: YES (/usr/bin/python3) 00:03:59.217 Program cat found: YES (/usr/bin/cat) 00:03:59.217 Compiler for C supports arguments -march=native: YES 00:03:59.217 Checking for size of "void *" : 8 00:03:59.217 Checking for size of "void *" : 8 (cached) 00:03:59.217 Library m found: YES 00:03:59.217 Library numa found: YES 00:03:59.217 Has header "numaif.h" : YES 00:03:59.217 Library fdt found: NO 00:03:59.217 Library execinfo found: NO 00:03:59.217 Has header "execinfo.h" : YES 00:03:59.217 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:59.217 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:59.217 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:59.217 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:59.217 Run-time dependency openssl found: YES 3.0.9 00:03:59.217 Run-time dependency libpcap found: YES 1.10.4 00:03:59.217 Has header "pcap.h" with dependency libpcap: YES 00:03:59.217 Compiler for C supports arguments -Wcast-qual: YES 00:03:59.217 Compiler for C supports arguments -Wdeprecated: YES 00:03:59.217 Compiler for C supports arguments -Wformat: YES 00:03:59.218 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:59.218 Compiler for C supports arguments -Wformat-security: NO 00:03:59.218 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:59.218 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:59.218 Compiler for C supports arguments -Wnested-externs: YES 00:03:59.218 Compiler for C supports arguments -Wold-style-definition: YES 00:03:59.218 Compiler for C supports arguments -Wpointer-arith: YES 00:03:59.218 Compiler for C supports arguments -Wsign-compare: YES 00:03:59.218 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:59.218 Compiler for C supports arguments -Wundef: YES 00:03:59.218 Compiler for C supports arguments -Wwrite-strings: YES 00:03:59.218 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:59.218 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:59.218 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:59.218 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:59.218 Program objdump found: YES (/usr/bin/objdump) 00:03:59.218 Compiler for C supports arguments -mavx512f: YES 00:03:59.218 Checking if "AVX512 checking" compiles: YES 00:03:59.218 Fetching value of define "__SSE4_2__" : 1 00:03:59.218 Fetching value of define "__AES__" : 1 00:03:59.218 Fetching value of define "__AVX__" : 1 00:03:59.218 
Fetching value of define "__AVX2__" : 1 00:03:59.218 Fetching value of define "__AVX512BW__" : (undefined) 00:03:59.218 Fetching value of define "__AVX512CD__" : (undefined) 00:03:59.218 Fetching value of define "__AVX512DQ__" : (undefined) 00:03:59.218 Fetching value of define "__AVX512F__" : (undefined) 00:03:59.218 Fetching value of define "__AVX512VL__" : (undefined) 00:03:59.218 Fetching value of define "__PCLMUL__" : 1 00:03:59.218 Fetching value of define "__RDRND__" : 1 00:03:59.218 Fetching value of define "__RDSEED__" : 1 00:03:59.218 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:59.218 Fetching value of define "__znver1__" : (undefined) 00:03:59.218 Fetching value of define "__znver2__" : (undefined) 00:03:59.218 Fetching value of define "__znver3__" : (undefined) 00:03:59.218 Fetching value of define "__znver4__" : (undefined) 00:03:59.218 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:59.218 Message: lib/log: Defining dependency "log" 00:03:59.218 Message: lib/kvargs: Defining dependency "kvargs" 00:03:59.218 Message: lib/telemetry: Defining dependency "telemetry" 00:03:59.218 Checking for function "getentropy" : NO 00:03:59.218 Message: lib/eal: Defining dependency "eal" 00:03:59.218 Message: lib/ring: Defining dependency "ring" 00:03:59.218 Message: lib/rcu: Defining dependency "rcu" 00:03:59.218 Message: lib/mempool: Defining dependency "mempool" 00:03:59.218 Message: lib/mbuf: Defining dependency "mbuf" 00:03:59.218 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:59.218 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:59.218 Compiler for C supports arguments -mpclmul: YES 00:03:59.218 Compiler for C supports arguments -maes: YES 00:03:59.218 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:59.218 Compiler for C supports arguments -mavx512bw: YES 00:03:59.218 Compiler for C supports arguments -mavx512dq: YES 00:03:59.218 Compiler for C supports arguments -mavx512vl: YES 00:03:59.218 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:59.218 Compiler for C supports arguments -mavx2: YES 00:03:59.218 Compiler for C supports arguments -mavx: YES 00:03:59.218 Message: lib/net: Defining dependency "net" 00:03:59.218 Message: lib/meter: Defining dependency "meter" 00:03:59.218 Message: lib/ethdev: Defining dependency "ethdev" 00:03:59.218 Message: lib/pci: Defining dependency "pci" 00:03:59.218 Message: lib/cmdline: Defining dependency "cmdline" 00:03:59.218 Message: lib/hash: Defining dependency "hash" 00:03:59.218 Message: lib/timer: Defining dependency "timer" 00:03:59.218 Message: lib/compressdev: Defining dependency "compressdev" 00:03:59.218 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:59.218 Message: lib/dmadev: Defining dependency "dmadev" 00:03:59.218 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:59.218 Message: lib/power: Defining dependency "power" 00:03:59.218 Message: lib/reorder: Defining dependency "reorder" 00:03:59.218 Message: lib/security: Defining dependency "security" 00:03:59.218 Has header "linux/userfaultfd.h" : YES 00:03:59.218 Has header "linux/vduse.h" : YES 00:03:59.218 Message: lib/vhost: Defining dependency "vhost" 00:03:59.218 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:59.218 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:59.218 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:59.218 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:59.218 Message: 
Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:59.218 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:59.218 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:59.218 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:59.218 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:59.218 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:59.218 Program doxygen found: YES (/usr/bin/doxygen) 00:03:59.218 Configuring doxy-api-html.conf using configuration 00:03:59.218 Configuring doxy-api-man.conf using configuration 00:03:59.218 Program mandb found: YES (/usr/bin/mandb) 00:03:59.218 Program sphinx-build found: NO 00:03:59.218 Configuring rte_build_config.h using configuration 00:03:59.218 Message: 00:03:59.218 ================= 00:03:59.218 Applications Enabled 00:03:59.218 ================= 00:03:59.218 00:03:59.218 apps: 00:03:59.218 00:03:59.218 00:03:59.218 Message: 00:03:59.218 ================= 00:03:59.218 Libraries Enabled 00:03:59.218 ================= 00:03:59.218 00:03:59.218 libs: 00:03:59.218 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:59.218 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:59.218 cryptodev, dmadev, power, reorder, security, vhost, 00:03:59.218 00:03:59.218 Message: 00:03:59.218 =============== 00:03:59.218 Drivers Enabled 00:03:59.218 =============== 00:03:59.218 00:03:59.218 common: 00:03:59.218 00:03:59.218 bus: 00:03:59.218 pci, vdev, 00:03:59.218 mempool: 00:03:59.218 ring, 00:03:59.218 dma: 00:03:59.218 00:03:59.218 net: 00:03:59.218 00:03:59.218 crypto: 00:03:59.218 00:03:59.218 compress: 00:03:59.218 00:03:59.218 vdpa: 00:03:59.218 00:03:59.218 00:03:59.218 Message: 00:03:59.218 ================= 00:03:59.218 Content Skipped 00:03:59.218 ================= 00:03:59.218 00:03:59.218 apps: 00:03:59.218 dumpcap: explicitly disabled via build config 00:03:59.218 graph: explicitly disabled via build config 00:03:59.218 pdump: explicitly disabled via build config 00:03:59.218 proc-info: explicitly disabled via build config 00:03:59.218 test-acl: explicitly disabled via build config 00:03:59.218 test-bbdev: explicitly disabled via build config 00:03:59.218 test-cmdline: explicitly disabled via build config 00:03:59.218 test-compress-perf: explicitly disabled via build config 00:03:59.218 test-crypto-perf: explicitly disabled via build config 00:03:59.218 test-dma-perf: explicitly disabled via build config 00:03:59.218 test-eventdev: explicitly disabled via build config 00:03:59.218 test-fib: explicitly disabled via build config 00:03:59.218 test-flow-perf: explicitly disabled via build config 00:03:59.218 test-gpudev: explicitly disabled via build config 00:03:59.218 test-mldev: explicitly disabled via build config 00:03:59.218 test-pipeline: explicitly disabled via build config 00:03:59.218 test-pmd: explicitly disabled via build config 00:03:59.218 test-regex: explicitly disabled via build config 00:03:59.218 test-sad: explicitly disabled via build config 00:03:59.218 test-security-perf: explicitly disabled via build config 00:03:59.218 00:03:59.218 libs: 00:03:59.218 metrics: explicitly disabled via build config 00:03:59.218 acl: explicitly disabled via build config 00:03:59.218 bbdev: explicitly disabled via build config 00:03:59.218 bitratestats: explicitly disabled via build config 00:03:59.218 bpf: explicitly disabled via build config 00:03:59.218 cfgfile: explicitly 
disabled via build config 00:03:59.218 distributor: explicitly disabled via build config 00:03:59.218 efd: explicitly disabled via build config 00:03:59.218 eventdev: explicitly disabled via build config 00:03:59.218 dispatcher: explicitly disabled via build config 00:03:59.218 gpudev: explicitly disabled via build config 00:03:59.218 gro: explicitly disabled via build config 00:03:59.218 gso: explicitly disabled via build config 00:03:59.218 ip_frag: explicitly disabled via build config 00:03:59.218 jobstats: explicitly disabled via build config 00:03:59.218 latencystats: explicitly disabled via build config 00:03:59.218 lpm: explicitly disabled via build config 00:03:59.218 member: explicitly disabled via build config 00:03:59.218 pcapng: explicitly disabled via build config 00:03:59.218 rawdev: explicitly disabled via build config 00:03:59.218 regexdev: explicitly disabled via build config 00:03:59.218 mldev: explicitly disabled via build config 00:03:59.218 rib: explicitly disabled via build config 00:03:59.218 sched: explicitly disabled via build config 00:03:59.218 stack: explicitly disabled via build config 00:03:59.218 ipsec: explicitly disabled via build config 00:03:59.218 pdcp: explicitly disabled via build config 00:03:59.218 fib: explicitly disabled via build config 00:03:59.218 port: explicitly disabled via build config 00:03:59.218 pdump: explicitly disabled via build config 00:03:59.218 table: explicitly disabled via build config 00:03:59.218 pipeline: explicitly disabled via build config 00:03:59.218 graph: explicitly disabled via build config 00:03:59.218 node: explicitly disabled via build config 00:03:59.218 00:03:59.219 drivers: 00:03:59.219 common/cpt: not in enabled drivers build config 00:03:59.219 common/dpaax: not in enabled drivers build config 00:03:59.219 common/iavf: not in enabled drivers build config 00:03:59.219 common/idpf: not in enabled drivers build config 00:03:59.219 common/mvep: not in enabled drivers build config 00:03:59.219 common/octeontx: not in enabled drivers build config 00:03:59.219 bus/auxiliary: not in enabled drivers build config 00:03:59.219 bus/cdx: not in enabled drivers build config 00:03:59.219 bus/dpaa: not in enabled drivers build config 00:03:59.219 bus/fslmc: not in enabled drivers build config 00:03:59.219 bus/ifpga: not in enabled drivers build config 00:03:59.219 bus/platform: not in enabled drivers build config 00:03:59.219 bus/vmbus: not in enabled drivers build config 00:03:59.219 common/cnxk: not in enabled drivers build config 00:03:59.219 common/mlx5: not in enabled drivers build config 00:03:59.219 common/nfp: not in enabled drivers build config 00:03:59.219 common/qat: not in enabled drivers build config 00:03:59.219 common/sfc_efx: not in enabled drivers build config 00:03:59.219 mempool/bucket: not in enabled drivers build config 00:03:59.219 mempool/cnxk: not in enabled drivers build config 00:03:59.219 mempool/dpaa: not in enabled drivers build config 00:03:59.219 mempool/dpaa2: not in enabled drivers build config 00:03:59.219 mempool/octeontx: not in enabled drivers build config 00:03:59.219 mempool/stack: not in enabled drivers build config 00:03:59.219 dma/cnxk: not in enabled drivers build config 00:03:59.219 dma/dpaa: not in enabled drivers build config 00:03:59.219 dma/dpaa2: not in enabled drivers build config 00:03:59.219 dma/hisilicon: not in enabled drivers build config 00:03:59.219 dma/idxd: not in enabled drivers build config 00:03:59.219 dma/ioat: not in enabled drivers build config 00:03:59.219 
dma/skeleton: not in enabled drivers build config 00:03:59.219 net/af_packet: not in enabled drivers build config 00:03:59.219 net/af_xdp: not in enabled drivers build config 00:03:59.219 net/ark: not in enabled drivers build config 00:03:59.219 net/atlantic: not in enabled drivers build config 00:03:59.219 net/avp: not in enabled drivers build config 00:03:59.219 net/axgbe: not in enabled drivers build config 00:03:59.219 net/bnx2x: not in enabled drivers build config 00:03:59.219 net/bnxt: not in enabled drivers build config 00:03:59.219 net/bonding: not in enabled drivers build config 00:03:59.219 net/cnxk: not in enabled drivers build config 00:03:59.219 net/cpfl: not in enabled drivers build config 00:03:59.219 net/cxgbe: not in enabled drivers build config 00:03:59.219 net/dpaa: not in enabled drivers build config 00:03:59.219 net/dpaa2: not in enabled drivers build config 00:03:59.219 net/e1000: not in enabled drivers build config 00:03:59.219 net/ena: not in enabled drivers build config 00:03:59.219 net/enetc: not in enabled drivers build config 00:03:59.219 net/enetfec: not in enabled drivers build config 00:03:59.219 net/enic: not in enabled drivers build config 00:03:59.219 net/failsafe: not in enabled drivers build config 00:03:59.219 net/fm10k: not in enabled drivers build config 00:03:59.219 net/gve: not in enabled drivers build config 00:03:59.219 net/hinic: not in enabled drivers build config 00:03:59.219 net/hns3: not in enabled drivers build config 00:03:59.219 net/i40e: not in enabled drivers build config 00:03:59.219 net/iavf: not in enabled drivers build config 00:03:59.219 net/ice: not in enabled drivers build config 00:03:59.219 net/idpf: not in enabled drivers build config 00:03:59.219 net/igc: not in enabled drivers build config 00:03:59.219 net/ionic: not in enabled drivers build config 00:03:59.219 net/ipn3ke: not in enabled drivers build config 00:03:59.219 net/ixgbe: not in enabled drivers build config 00:03:59.219 net/mana: not in enabled drivers build config 00:03:59.219 net/memif: not in enabled drivers build config 00:03:59.219 net/mlx4: not in enabled drivers build config 00:03:59.219 net/mlx5: not in enabled drivers build config 00:03:59.219 net/mvneta: not in enabled drivers build config 00:03:59.219 net/mvpp2: not in enabled drivers build config 00:03:59.219 net/netvsc: not in enabled drivers build config 00:03:59.219 net/nfb: not in enabled drivers build config 00:03:59.219 net/nfp: not in enabled drivers build config 00:03:59.219 net/ngbe: not in enabled drivers build config 00:03:59.219 net/null: not in enabled drivers build config 00:03:59.219 net/octeontx: not in enabled drivers build config 00:03:59.219 net/octeon_ep: not in enabled drivers build config 00:03:59.219 net/pcap: not in enabled drivers build config 00:03:59.219 net/pfe: not in enabled drivers build config 00:03:59.219 net/qede: not in enabled drivers build config 00:03:59.219 net/ring: not in enabled drivers build config 00:03:59.219 net/sfc: not in enabled drivers build config 00:03:59.219 net/softnic: not in enabled drivers build config 00:03:59.219 net/tap: not in enabled drivers build config 00:03:59.219 net/thunderx: not in enabled drivers build config 00:03:59.219 net/txgbe: not in enabled drivers build config 00:03:59.219 net/vdev_netvsc: not in enabled drivers build config 00:03:59.219 net/vhost: not in enabled drivers build config 00:03:59.219 net/virtio: not in enabled drivers build config 00:03:59.219 net/vmxnet3: not in enabled drivers build config 00:03:59.219 raw/*: 
missing internal dependency, "rawdev" 00:03:59.219 crypto/armv8: not in enabled drivers build config 00:03:59.219 crypto/bcmfs: not in enabled drivers build config 00:03:59.219 crypto/caam_jr: not in enabled drivers build config 00:03:59.219 crypto/ccp: not in enabled drivers build config 00:03:59.219 crypto/cnxk: not in enabled drivers build config 00:03:59.219 crypto/dpaa_sec: not in enabled drivers build config 00:03:59.219 crypto/dpaa2_sec: not in enabled drivers build config 00:03:59.219 crypto/ipsec_mb: not in enabled drivers build config 00:03:59.219 crypto/mlx5: not in enabled drivers build config 00:03:59.219 crypto/mvsam: not in enabled drivers build config 00:03:59.219 crypto/nitrox: not in enabled drivers build config 00:03:59.219 crypto/null: not in enabled drivers build config 00:03:59.219 crypto/octeontx: not in enabled drivers build config 00:03:59.219 crypto/openssl: not in enabled drivers build config 00:03:59.219 crypto/scheduler: not in enabled drivers build config 00:03:59.219 crypto/uadk: not in enabled drivers build config 00:03:59.219 crypto/virtio: not in enabled drivers build config 00:03:59.219 compress/isal: not in enabled drivers build config 00:03:59.219 compress/mlx5: not in enabled drivers build config 00:03:59.219 compress/octeontx: not in enabled drivers build config 00:03:59.219 compress/zlib: not in enabled drivers build config 00:03:59.219 regex/*: missing internal dependency, "regexdev" 00:03:59.219 ml/*: missing internal dependency, "mldev" 00:03:59.219 vdpa/ifc: not in enabled drivers build config 00:03:59.219 vdpa/mlx5: not in enabled drivers build config 00:03:59.219 vdpa/nfp: not in enabled drivers build config 00:03:59.219 vdpa/sfc: not in enabled drivers build config 00:03:59.219 event/*: missing internal dependency, "eventdev" 00:03:59.219 baseband/*: missing internal dependency, "bbdev" 00:03:59.219 gpu/*: missing internal dependency, "gpudev" 00:03:59.219 00:03:59.219 00:03:59.476 Build targets in project: 85 00:03:59.476 00:03:59.476 DPDK 23.11.0 00:03:59.476 00:03:59.476 User defined options 00:03:59.476 buildtype : debug 00:03:59.476 default_library : shared 00:03:59.476 libdir : lib 00:03:59.476 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:59.476 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:59.476 c_link_args : 00:03:59.476 cpu_instruction_set: native 00:03:59.476 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:59.476 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:59.476 enable_docs : false 00:03:59.476 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:03:59.476 enable_kmods : false 00:03:59.476 tests : false 00:03:59.476 00:03:59.476 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:00.041 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:04:00.041 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:04:00.299 [2/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:04:00.299 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:04:00.299 [4/265] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:04:00.299 [5/265] Linking static target lib/librte_kvargs.a 00:04:00.299 [6/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:04:00.299 [7/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:04:00.299 [8/265] Linking static target lib/librte_log.a 00:04:00.299 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:04:00.299 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:04:00.863 [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:04:00.863 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:04:00.863 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:04:01.120 [14/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:04:01.120 [15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:04:01.120 [16/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:04:01.120 [17/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:01.120 [18/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:04:01.120 [19/265] Linking static target lib/librte_telemetry.a 00:04:01.377 [20/265] Linking target lib/librte_log.so.24.0 00:04:01.377 [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:04:01.634 [22/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:04:01.634 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:01.634 [24/265] Linking target lib/librte_kvargs.so.24.0 00:04:01.634 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:04:01.634 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:04:01.891 [27/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:04:01.891 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:01.891 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:04:01.891 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:01.891 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:04:01.891 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:04:02.149 [33/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:02.149 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:04:02.149 [35/265] Linking target lib/librte_telemetry.so.24.0 00:04:02.714 [36/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:04:02.714 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:04:02.714 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:04:02.715 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:02.715 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:04:02.715 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:04:02.973 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:04:02.973 [43/265] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:04:02.973 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:02.973 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:04:02.973 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:04:02.973 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:02.973 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:04:03.230 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:03.230 [50/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:04:03.487 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:04:03.745 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:03.745 [53/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:03.745 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:03.745 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:04.003 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:04.003 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:04.003 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:04.003 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:04.003 [60/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:04.003 [61/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:04.003 [62/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:04:04.003 [63/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:04.572 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:04.572 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:04.844 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:04.844 [67/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:04.844 [68/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:05.102 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:05.102 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:05.102 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:05.102 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:05.102 [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:05.102 [74/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:05.102 [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:05.361 [76/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:05.361 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:05.361 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:05.926 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:05.926 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:05.926 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:04:06.184 [82/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:06.184 [83/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:06.184 
[84/265] Linking static target lib/librte_ring.a 00:04:06.442 [85/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:06.442 [86/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:06.700 [87/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:06.700 [88/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:06.700 [89/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:06.700 [90/265] Linking static target lib/librte_eal.a 00:04:06.700 [91/265] Linking static target lib/librte_rcu.a 00:04:06.700 [92/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:06.700 [93/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:06.958 [94/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:06.958 [95/265] Linking static target lib/librte_mempool.a 00:04:07.217 [96/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:07.217 [97/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:07.217 [98/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:07.217 [99/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:07.475 [100/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:04:07.475 [101/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:04:07.475 [102/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:07.734 [103/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:07.734 [104/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:07.734 [105/265] Linking static target lib/librte_mbuf.a 00:04:07.993 [106/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:07.993 [107/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:07.993 [108/265] Linking static target lib/librte_meter.a 00:04:07.993 [109/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:08.250 [110/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:08.251 [111/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:08.251 [112/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:08.251 [113/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:04:08.509 [114/265] Linking static target lib/librte_net.a 00:04:08.509 [115/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:08.509 [116/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:08.767 [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:09.026 [118/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:09.026 [119/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:09.283 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:09.847 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:09.847 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:09.847 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:09.847 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:09.847 [125/265] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:09.847 [126/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:09.847 [127/265] Linking static target lib/librte_pci.a 00:04:09.847 [128/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:04:10.105 [129/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:04:10.105 [130/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:04:10.105 [131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:10.105 [132/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:04:10.363 [133/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:10.363 [134/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:10.363 [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:04:10.363 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:04:10.363 [137/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:10.363 [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:04:10.363 [139/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:10.363 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:04:10.363 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:04:10.363 [142/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:04:10.621 [143/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:10.879 [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:10.879 [145/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:11.136 [146/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:11.136 [147/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:11.136 [148/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:11.136 [149/265] Linking static target lib/librte_ethdev.a 00:04:11.136 [150/265] Linking static target lib/librte_cmdline.a 00:04:11.136 [151/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:11.409 [152/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:11.409 [153/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:11.666 [154/265] Linking static target lib/librte_timer.a 00:04:11.666 [155/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:11.666 [156/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:11.666 [157/265] Linking static target lib/librte_hash.a 00:04:11.666 [158/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:11.666 [159/265] Linking static target lib/librte_compressdev.a 00:04:12.233 [160/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:12.233 [161/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:12.233 [162/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:04:12.233 [163/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:12.233 [164/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:12.233 [165/265] Linking static target lib/librte_dmadev.a 
00:04:12.492 [166/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:04:12.750 [167/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:04:13.008 [168/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:04:13.008 [169/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:04:13.008 [170/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:13.008 [171/265] Linking static target lib/librte_cryptodev.a 00:04:13.008 [172/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:13.008 [173/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:13.008 [174/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:13.267 [175/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:13.267 [176/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:04:13.525 [177/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:04:13.525 [178/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:04:13.840 [179/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:04:13.840 [180/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:04:13.840 [181/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:04:14.098 [182/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:04:14.098 [183/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:04:14.098 [184/265] Linking static target lib/librte_reorder.a 00:04:14.098 [185/265] Linking static target lib/librte_power.a 00:04:14.356 [186/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:04:14.356 [187/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:04:14.356 [188/265] Linking static target lib/librte_security.a 00:04:14.615 [189/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:04:14.615 [190/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:04:14.615 [191/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:15.181 [192/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:15.440 [193/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:04:15.440 [194/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:04:15.440 [195/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:04:15.440 [196/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:04:15.699 [197/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:15.959 [198/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:04:15.959 [199/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:04:15.959 [200/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:04:15.959 [201/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:04:16.218 [202/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:04:16.218 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:04:16.476 [204/265] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:04:16.476 [205/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:16.734 [206/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:04:16.734 [207/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:04:16.734 [208/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:04:16.734 [209/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:04:16.734 [210/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:04:16.734 [211/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:16.993 [212/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:04:16.993 [213/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:16.993 [214/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:16.993 [215/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:04:16.993 [216/265] Linking static target drivers/librte_bus_vdev.a 00:04:16.993 [217/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:16.993 [218/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:16.993 [219/265] Linking static target drivers/librte_bus_pci.a 00:04:16.993 [220/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:16.993 [221/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:16.993 [222/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:16.993 [223/265] Linking static target drivers/librte_mempool_ring.a 00:04:17.252 [224/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:17.819 [225/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:18.753 [226/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:18.753 [227/265] Linking static target lib/librte_vhost.a 00:04:19.372 [228/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:19.372 [229/265] Linking target lib/librte_eal.so.24.0 00:04:19.372 [230/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:04:19.630 [231/265] Linking target lib/librte_ring.so.24.0 00:04:19.630 [232/265] Linking target lib/librte_pci.so.24.0 00:04:19.630 [233/265] Linking target drivers/librte_bus_vdev.so.24.0 00:04:19.630 [234/265] Linking target lib/librte_meter.so.24.0 00:04:19.630 [235/265] Linking target lib/librte_timer.so.24.0 00:04:19.630 [236/265] Linking target lib/librte_dmadev.so.24.0 00:04:19.630 [237/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:04:19.630 [238/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:04:19.630 [239/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:04:19.630 [240/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:04:19.630 [241/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:04:19.630 [242/265] Linking target drivers/librte_bus_pci.so.24.0 00:04:19.630 [243/265] Linking target lib/librte_mempool.so.24.0 00:04:19.630 [244/265] Linking target 
lib/librte_rcu.so.24.0 00:04:19.888 [245/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:19.888 [246/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:04:19.888 [247/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:04:19.888 [248/265] Linking target drivers/librte_mempool_ring.so.24.0 00:04:19.888 [249/265] Linking target lib/librte_mbuf.so.24.0 00:04:20.146 [250/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:04:20.146 [251/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:20.146 [252/265] Linking target lib/librte_compressdev.so.24.0 00:04:20.147 [253/265] Linking target lib/librte_reorder.so.24.0 00:04:20.147 [254/265] Linking target lib/librte_cryptodev.so.24.0 00:04:20.147 [255/265] Linking target lib/librte_net.so.24.0 00:04:20.408 [256/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:04:20.408 [257/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:04:20.408 [258/265] Linking target lib/librte_hash.so.24.0 00:04:20.408 [259/265] Linking target lib/librte_cmdline.so.24.0 00:04:20.408 [260/265] Linking target lib/librte_security.so.24.0 00:04:20.408 [261/265] Linking target lib/librte_ethdev.so.24.0 00:04:20.670 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:04:20.670 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:04:20.670 [264/265] Linking target lib/librte_power.so.24.0 00:04:20.670 [265/265] Linking target lib/librte_vhost.so.24.0 00:04:20.670 INFO: autodetecting backend as ninja 00:04:20.670 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:04:22.045 CC lib/ut_mock/mock.o 00:04:22.045 CC lib/log/log.o 00:04:22.045 CC lib/log/log_flags.o 00:04:22.045 CC lib/log/log_deprecated.o 00:04:22.045 CC lib/ut/ut.o 00:04:22.045 LIB libspdk_ut_mock.a 00:04:22.303 LIB libspdk_log.a 00:04:22.303 LIB libspdk_ut.a 00:04:22.303 SO libspdk_ut_mock.so.6.0 00:04:22.303 SO libspdk_ut.so.2.0 00:04:22.303 SO libspdk_log.so.7.0 00:04:22.303 SYMLINK libspdk_ut_mock.so 00:04:22.303 SYMLINK libspdk_ut.so 00:04:22.303 SYMLINK libspdk_log.so 00:04:22.561 CC lib/dma/dma.o 00:04:22.561 CC lib/ioat/ioat.o 00:04:22.561 CC lib/util/base64.o 00:04:22.561 CC lib/util/bit_array.o 00:04:22.561 CC lib/util/cpuset.o 00:04:22.561 CC lib/util/crc16.o 00:04:22.561 CC lib/util/crc32.o 00:04:22.561 CXX lib/trace_parser/trace.o 00:04:22.561 CC lib/util/crc32c.o 00:04:22.561 CC lib/vfio_user/host/vfio_user_pci.o 00:04:22.819 CC lib/util/crc32_ieee.o 00:04:22.819 CC lib/util/crc64.o 00:04:22.819 CC lib/util/dif.o 00:04:22.819 CC lib/util/fd.o 00:04:22.819 CC lib/util/file.o 00:04:22.819 LIB libspdk_ioat.a 00:04:22.819 CC lib/util/hexlify.o 00:04:23.076 LIB libspdk_dma.a 00:04:23.076 SO libspdk_ioat.so.7.0 00:04:23.076 SO libspdk_dma.so.4.0 00:04:23.076 CC lib/vfio_user/host/vfio_user.o 00:04:23.076 SYMLINK libspdk_ioat.so 00:04:23.076 CC lib/util/iov.o 00:04:23.076 CC lib/util/math.o 00:04:23.076 CC lib/util/pipe.o 00:04:23.076 CC lib/util/strerror_tls.o 00:04:23.076 CC lib/util/string.o 00:04:23.076 SYMLINK libspdk_dma.so 00:04:23.076 CC lib/util/uuid.o 00:04:23.076 CC lib/util/fd_group.o 00:04:23.334 CC lib/util/xor.o 00:04:23.334 CC lib/util/zipf.o 00:04:23.334 LIB 
libspdk_vfio_user.a 00:04:23.334 SO libspdk_vfio_user.so.5.0 00:04:23.334 SYMLINK libspdk_vfio_user.so 00:04:23.334 LIB libspdk_util.a 00:04:23.592 SO libspdk_util.so.9.0 00:04:23.592 LIB libspdk_trace_parser.a 00:04:23.592 SYMLINK libspdk_util.so 00:04:23.850 SO libspdk_trace_parser.so.5.0 00:04:23.850 SYMLINK libspdk_trace_parser.so 00:04:23.850 CC lib/json/json_parse.o 00:04:23.850 CC lib/json/json_util.o 00:04:23.850 CC lib/rdma/rdma_verbs.o 00:04:23.850 CC lib/rdma/common.o 00:04:23.850 CC lib/json/json_write.o 00:04:23.850 CC lib/idxd/idxd.o 00:04:23.850 CC lib/idxd/idxd_user.o 00:04:23.850 CC lib/env_dpdk/env.o 00:04:23.850 CC lib/conf/conf.o 00:04:23.850 CC lib/vmd/vmd.o 00:04:24.108 CC lib/vmd/led.o 00:04:24.108 CC lib/env_dpdk/memory.o 00:04:24.108 CC lib/env_dpdk/pci.o 00:04:24.108 LIB libspdk_rdma.a 00:04:24.108 CC lib/env_dpdk/init.o 00:04:24.108 SO libspdk_rdma.so.6.0 00:04:24.366 LIB libspdk_conf.a 00:04:24.366 LIB libspdk_json.a 00:04:24.366 SYMLINK libspdk_rdma.so 00:04:24.366 CC lib/env_dpdk/threads.o 00:04:24.366 CC lib/env_dpdk/pci_ioat.o 00:04:24.366 SO libspdk_conf.so.6.0 00:04:24.366 SO libspdk_json.so.6.0 00:04:24.366 SYMLINK libspdk_conf.so 00:04:24.366 SYMLINK libspdk_json.so 00:04:24.366 CC lib/env_dpdk/pci_virtio.o 00:04:24.366 CC lib/env_dpdk/pci_vmd.o 00:04:24.366 CC lib/env_dpdk/pci_idxd.o 00:04:24.623 CC lib/env_dpdk/pci_event.o 00:04:24.623 CC lib/env_dpdk/sigbus_handler.o 00:04:24.623 CC lib/env_dpdk/pci_dpdk.o 00:04:24.623 LIB libspdk_idxd.a 00:04:24.623 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:24.623 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:24.623 SO libspdk_idxd.so.12.0 00:04:24.623 LIB libspdk_vmd.a 00:04:24.623 SYMLINK libspdk_idxd.so 00:04:24.623 SO libspdk_vmd.so.6.0 00:04:24.623 SYMLINK libspdk_vmd.so 00:04:24.882 CC lib/jsonrpc/jsonrpc_server.o 00:04:24.882 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:24.882 CC lib/jsonrpc/jsonrpc_client.o 00:04:24.882 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:25.140 LIB libspdk_jsonrpc.a 00:04:25.140 SO libspdk_jsonrpc.so.6.0 00:04:25.140 SYMLINK libspdk_jsonrpc.so 00:04:25.398 LIB libspdk_env_dpdk.a 00:04:25.398 CC lib/rpc/rpc.o 00:04:25.398 SO libspdk_env_dpdk.so.14.0 00:04:25.656 SYMLINK libspdk_env_dpdk.so 00:04:25.656 LIB libspdk_rpc.a 00:04:25.656 SO libspdk_rpc.so.6.0 00:04:25.656 SYMLINK libspdk_rpc.so 00:04:25.914 CC lib/notify/notify_rpc.o 00:04:25.914 CC lib/notify/notify.o 00:04:25.914 CC lib/trace/trace.o 00:04:25.914 CC lib/trace/trace_rpc.o 00:04:25.914 CC lib/trace/trace_flags.o 00:04:25.914 CC lib/keyring/keyring.o 00:04:25.914 CC lib/keyring/keyring_rpc.o 00:04:26.172 LIB libspdk_notify.a 00:04:26.172 SO libspdk_notify.so.6.0 00:04:26.172 LIB libspdk_keyring.a 00:04:26.172 LIB libspdk_trace.a 00:04:26.172 SYMLINK libspdk_notify.so 00:04:26.172 SO libspdk_keyring.so.1.0 00:04:26.172 SO libspdk_trace.so.10.0 00:04:26.430 SYMLINK libspdk_keyring.so 00:04:26.430 SYMLINK libspdk_trace.so 00:04:26.688 CC lib/thread/thread.o 00:04:26.688 CC lib/thread/iobuf.o 00:04:26.688 CC lib/sock/sock.o 00:04:26.688 CC lib/sock/sock_rpc.o 00:04:26.990 LIB libspdk_sock.a 00:04:26.990 SO libspdk_sock.so.9.0 00:04:27.250 SYMLINK libspdk_sock.so 00:04:27.507 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:27.507 CC lib/nvme/nvme_ctrlr.o 00:04:27.507 CC lib/nvme/nvme_fabric.o 00:04:27.507 CC lib/nvme/nvme_ns_cmd.o 00:04:27.507 CC lib/nvme/nvme_ns.o 00:04:27.507 CC lib/nvme/nvme_pcie_common.o 00:04:27.507 CC lib/nvme/nvme.o 00:04:27.507 CC lib/nvme/nvme_pcie.o 00:04:27.507 CC lib/nvme/nvme_qpair.o 00:04:28.072 CC lib/nvme/nvme_quirks.o 
00:04:28.072 CC lib/nvme/nvme_transport.o 00:04:28.072 LIB libspdk_thread.a 00:04:28.330 SO libspdk_thread.so.10.0 00:04:28.330 SYMLINK libspdk_thread.so 00:04:28.330 CC lib/nvme/nvme_discovery.o 00:04:28.330 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:28.330 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:28.330 CC lib/nvme/nvme_tcp.o 00:04:28.330 CC lib/nvme/nvme_opal.o 00:04:28.330 CC lib/nvme/nvme_io_msg.o 00:04:28.895 CC lib/nvme/nvme_poll_group.o 00:04:28.895 CC lib/accel/accel.o 00:04:28.895 CC lib/nvme/nvme_zns.o 00:04:28.895 CC lib/nvme/nvme_stubs.o 00:04:29.154 CC lib/blob/blobstore.o 00:04:29.154 CC lib/blob/request.o 00:04:29.154 CC lib/virtio/virtio.o 00:04:29.154 CC lib/init/json_config.o 00:04:29.411 CC lib/virtio/virtio_vhost_user.o 00:04:29.411 CC lib/blob/zeroes.o 00:04:29.411 CC lib/init/subsystem.o 00:04:29.669 CC lib/blob/blob_bs_dev.o 00:04:29.669 CC lib/accel/accel_rpc.o 00:04:29.669 CC lib/accel/accel_sw.o 00:04:29.669 CC lib/nvme/nvme_auth.o 00:04:29.669 CC lib/virtio/virtio_vfio_user.o 00:04:29.669 CC lib/nvme/nvme_cuse.o 00:04:29.927 CC lib/nvme/nvme_rdma.o 00:04:29.927 CC lib/virtio/virtio_pci.o 00:04:29.927 CC lib/init/subsystem_rpc.o 00:04:29.927 CC lib/init/rpc.o 00:04:29.927 LIB libspdk_accel.a 00:04:30.185 SO libspdk_accel.so.15.0 00:04:30.185 LIB libspdk_virtio.a 00:04:30.185 SYMLINK libspdk_accel.so 00:04:30.185 LIB libspdk_init.a 00:04:30.185 SO libspdk_virtio.so.7.0 00:04:30.185 SO libspdk_init.so.5.0 00:04:30.185 SYMLINK libspdk_init.so 00:04:30.185 SYMLINK libspdk_virtio.so 00:04:30.466 CC lib/bdev/bdev.o 00:04:30.466 CC lib/bdev/bdev_rpc.o 00:04:30.466 CC lib/bdev/part.o 00:04:30.466 CC lib/bdev/bdev_zone.o 00:04:30.466 CC lib/bdev/scsi_nvme.o 00:04:30.466 CC lib/event/app.o 00:04:30.742 CC lib/event/reactor.o 00:04:30.742 CC lib/event/log_rpc.o 00:04:30.742 CC lib/event/app_rpc.o 00:04:30.742 CC lib/event/scheduler_static.o 00:04:31.000 LIB libspdk_event.a 00:04:31.000 SO libspdk_event.so.13.0 00:04:31.258 SYMLINK libspdk_event.so 00:04:31.258 LIB libspdk_nvme.a 00:04:31.515 SO libspdk_nvme.so.13.0 00:04:31.773 SYMLINK libspdk_nvme.so 00:04:32.031 LIB libspdk_blob.a 00:04:32.031 SO libspdk_blob.so.11.0 00:04:32.031 SYMLINK libspdk_blob.so 00:04:32.289 CC lib/blobfs/blobfs.o 00:04:32.289 CC lib/blobfs/tree.o 00:04:32.289 CC lib/lvol/lvol.o 00:04:33.223 LIB libspdk_bdev.a 00:04:33.223 SO libspdk_bdev.so.15.0 00:04:33.223 LIB libspdk_lvol.a 00:04:33.223 LIB libspdk_blobfs.a 00:04:33.223 SO libspdk_lvol.so.10.0 00:04:33.223 SYMLINK libspdk_bdev.so 00:04:33.223 SO libspdk_blobfs.so.10.0 00:04:33.223 SYMLINK libspdk_lvol.so 00:04:33.481 SYMLINK libspdk_blobfs.so 00:04:33.481 CC lib/ftl/ftl_core.o 00:04:33.481 CC lib/ftl/ftl_init.o 00:04:33.481 CC lib/ftl/ftl_layout.o 00:04:33.481 CC lib/ftl/ftl_io.o 00:04:33.481 CC lib/ftl/ftl_debug.o 00:04:33.481 CC lib/ftl/ftl_sb.o 00:04:33.481 CC lib/nbd/nbd.o 00:04:33.481 CC lib/nvmf/ctrlr.o 00:04:33.481 CC lib/ublk/ublk.o 00:04:33.481 CC lib/scsi/dev.o 00:04:33.739 CC lib/ftl/ftl_l2p.o 00:04:33.739 CC lib/ftl/ftl_l2p_flat.o 00:04:33.739 CC lib/ublk/ublk_rpc.o 00:04:33.739 CC lib/scsi/lun.o 00:04:33.739 CC lib/scsi/port.o 00:04:33.739 CC lib/scsi/scsi.o 00:04:33.998 CC lib/ftl/ftl_nv_cache.o 00:04:33.998 CC lib/nbd/nbd_rpc.o 00:04:33.998 CC lib/ftl/ftl_band.o 00:04:33.998 CC lib/ftl/ftl_band_ops.o 00:04:33.998 CC lib/ftl/ftl_writer.o 00:04:33.998 CC lib/ftl/ftl_rq.o 00:04:33.998 LIB libspdk_nbd.a 00:04:34.321 CC lib/ftl/ftl_reloc.o 00:04:34.321 CC lib/scsi/scsi_bdev.o 00:04:34.321 SO libspdk_nbd.so.7.0 00:04:34.321 LIB 
libspdk_ublk.a 00:04:34.321 CC lib/nvmf/ctrlr_discovery.o 00:04:34.321 SO libspdk_ublk.so.3.0 00:04:34.321 CC lib/ftl/ftl_l2p_cache.o 00:04:34.321 SYMLINK libspdk_nbd.so 00:04:34.321 CC lib/ftl/ftl_p2l.o 00:04:34.321 SYMLINK libspdk_ublk.so 00:04:34.321 CC lib/ftl/mngt/ftl_mngt.o 00:04:34.579 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:34.579 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:34.579 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:34.579 CC lib/scsi/scsi_pr.o 00:04:34.579 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:34.579 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:34.579 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:34.579 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:34.838 CC lib/scsi/scsi_rpc.o 00:04:34.838 CC lib/nvmf/ctrlr_bdev.o 00:04:34.838 CC lib/nvmf/subsystem.o 00:04:34.838 CC lib/nvmf/nvmf.o 00:04:34.838 CC lib/scsi/task.o 00:04:34.838 CC lib/nvmf/nvmf_rpc.o 00:04:34.838 CC lib/nvmf/transport.o 00:04:34.838 CC lib/nvmf/tcp.o 00:04:35.096 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:35.096 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:35.096 LIB libspdk_scsi.a 00:04:35.096 SO libspdk_scsi.so.9.0 00:04:35.354 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:35.354 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:35.354 SYMLINK libspdk_scsi.so 00:04:35.354 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:35.354 CC lib/ftl/utils/ftl_conf.o 00:04:35.612 CC lib/nvmf/rdma.o 00:04:35.612 CC lib/ftl/utils/ftl_md.o 00:04:35.612 CC lib/ftl/utils/ftl_mempool.o 00:04:35.612 CC lib/ftl/utils/ftl_bitmap.o 00:04:35.871 CC lib/ftl/utils/ftl_property.o 00:04:35.871 CC lib/iscsi/conn.o 00:04:35.871 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:35.871 CC lib/iscsi/init_grp.o 00:04:35.871 CC lib/vhost/vhost.o 00:04:35.871 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:36.129 CC lib/vhost/vhost_rpc.o 00:04:36.129 CC lib/vhost/vhost_scsi.o 00:04:36.129 CC lib/vhost/vhost_blk.o 00:04:36.129 CC lib/iscsi/iscsi.o 00:04:36.129 CC lib/iscsi/md5.o 00:04:36.129 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:36.387 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:36.387 CC lib/vhost/rte_vhost_user.o 00:04:36.387 CC lib/iscsi/param.o 00:04:36.645 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:36.645 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:36.645 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:36.645 CC lib/iscsi/portal_grp.o 00:04:36.903 CC lib/iscsi/tgt_node.o 00:04:36.903 CC lib/iscsi/iscsi_subsystem.o 00:04:36.903 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:36.903 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:37.161 CC lib/iscsi/iscsi_rpc.o 00:04:37.161 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:37.161 CC lib/ftl/base/ftl_base_dev.o 00:04:37.161 CC lib/iscsi/task.o 00:04:37.161 CC lib/ftl/base/ftl_base_bdev.o 00:04:37.161 CC lib/ftl/ftl_trace.o 00:04:37.726 LIB libspdk_ftl.a 00:04:37.726 LIB libspdk_vhost.a 00:04:37.726 LIB libspdk_nvmf.a 00:04:37.726 SO libspdk_vhost.so.8.0 00:04:37.726 SO libspdk_ftl.so.9.0 00:04:37.726 SO libspdk_nvmf.so.18.0 00:04:37.726 SYMLINK libspdk_vhost.so 00:04:37.726 LIB libspdk_iscsi.a 00:04:37.992 SO libspdk_iscsi.so.8.0 00:04:37.992 SYMLINK libspdk_nvmf.so 00:04:38.250 SYMLINK libspdk_iscsi.so 00:04:38.250 SYMLINK libspdk_ftl.so 00:04:38.508 CC module/env_dpdk/env_dpdk_rpc.o 00:04:38.508 CC module/accel/dsa/accel_dsa.o 00:04:38.508 CC module/keyring/file/keyring.o 00:04:38.508 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:38.508 CC module/accel/error/accel_error.o 00:04:38.508 CC module/accel/iaa/accel_iaa.o 00:04:38.509 CC module/sock/posix/posix.o 00:04:38.509 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:38.509 CC module/accel/ioat/accel_ioat.o 00:04:38.766 CC 
module/blob/bdev/blob_bdev.o 00:04:38.766 LIB libspdk_env_dpdk_rpc.a 00:04:38.766 SO libspdk_env_dpdk_rpc.so.6.0 00:04:38.766 SYMLINK libspdk_env_dpdk_rpc.so 00:04:38.766 CC module/accel/error/accel_error_rpc.o 00:04:38.766 LIB libspdk_scheduler_dynamic.a 00:04:38.766 CC module/accel/ioat/accel_ioat_rpc.o 00:04:38.766 CC module/keyring/file/keyring_rpc.o 00:04:38.766 SO libspdk_scheduler_dynamic.so.4.0 00:04:38.766 LIB libspdk_scheduler_dpdk_governor.a 00:04:38.766 CC module/accel/dsa/accel_dsa_rpc.o 00:04:38.766 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:39.023 CC module/accel/iaa/accel_iaa_rpc.o 00:04:39.023 SYMLINK libspdk_scheduler_dynamic.so 00:04:39.023 LIB libspdk_blob_bdev.a 00:04:39.023 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:39.023 LIB libspdk_accel_error.a 00:04:39.023 LIB libspdk_accel_ioat.a 00:04:39.023 SO libspdk_blob_bdev.so.11.0 00:04:39.023 LIB libspdk_keyring_file.a 00:04:39.023 SO libspdk_accel_error.so.2.0 00:04:39.023 CC module/scheduler/gscheduler/gscheduler.o 00:04:39.023 SO libspdk_accel_ioat.so.6.0 00:04:39.023 SO libspdk_keyring_file.so.1.0 00:04:39.023 SYMLINK libspdk_blob_bdev.so 00:04:39.023 LIB libspdk_accel_iaa.a 00:04:39.023 LIB libspdk_accel_dsa.a 00:04:39.023 SYMLINK libspdk_accel_error.so 00:04:39.023 SO libspdk_accel_iaa.so.3.0 00:04:39.023 SYMLINK libspdk_accel_ioat.so 00:04:39.023 SYMLINK libspdk_keyring_file.so 00:04:39.023 SO libspdk_accel_dsa.so.5.0 00:04:39.281 LIB libspdk_scheduler_gscheduler.a 00:04:39.281 SYMLINK libspdk_accel_iaa.so 00:04:39.281 SO libspdk_scheduler_gscheduler.so.4.0 00:04:39.281 SYMLINK libspdk_accel_dsa.so 00:04:39.281 SYMLINK libspdk_scheduler_gscheduler.so 00:04:39.539 CC module/bdev/error/vbdev_error.o 00:04:39.539 CC module/blobfs/bdev/blobfs_bdev.o 00:04:39.539 CC module/bdev/null/bdev_null.o 00:04:39.539 CC module/bdev/gpt/gpt.o 00:04:39.539 CC module/bdev/delay/vbdev_delay.o 00:04:39.539 CC module/bdev/malloc/bdev_malloc.o 00:04:39.539 CC module/bdev/lvol/vbdev_lvol.o 00:04:39.539 CC module/bdev/nvme/bdev_nvme.o 00:04:39.539 CC module/bdev/passthru/vbdev_passthru.o 00:04:39.797 CC module/bdev/null/bdev_null_rpc.o 00:04:39.797 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:39.797 CC module/bdev/gpt/vbdev_gpt.o 00:04:39.797 LIB libspdk_sock_posix.a 00:04:39.797 SO libspdk_sock_posix.so.6.0 00:04:39.797 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:39.797 LIB libspdk_bdev_null.a 00:04:39.797 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:39.797 CC module/bdev/error/vbdev_error_rpc.o 00:04:39.797 SO libspdk_bdev_null.so.6.0 00:04:40.054 SYMLINK libspdk_sock_posix.so 00:04:40.054 SYMLINK libspdk_bdev_null.so 00:04:40.054 LIB libspdk_blobfs_bdev.a 00:04:40.054 LIB libspdk_bdev_passthru.a 00:04:40.054 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:40.054 SO libspdk_bdev_passthru.so.6.0 00:04:40.054 SO libspdk_blobfs_bdev.so.6.0 00:04:40.054 CC module/bdev/raid/bdev_raid.o 00:04:40.054 LIB libspdk_bdev_gpt.a 00:04:40.054 LIB libspdk_bdev_delay.a 00:04:40.311 LIB libspdk_bdev_error.a 00:04:40.311 SYMLINK libspdk_blobfs_bdev.so 00:04:40.311 CC module/bdev/split/vbdev_split.o 00:04:40.311 SYMLINK libspdk_bdev_passthru.so 00:04:40.311 CC module/bdev/raid/bdev_raid_rpc.o 00:04:40.311 SO libspdk_bdev_gpt.so.6.0 00:04:40.311 SO libspdk_bdev_delay.so.6.0 00:04:40.311 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:40.311 SO libspdk_bdev_error.so.6.0 00:04:40.311 SYMLINK libspdk_bdev_delay.so 00:04:40.311 CC module/bdev/split/vbdev_split_rpc.o 00:04:40.311 SYMLINK libspdk_bdev_gpt.so 00:04:40.311 CC 
module/bdev/raid/bdev_raid_sb.o 00:04:40.311 LIB libspdk_bdev_malloc.a 00:04:40.311 SYMLINK libspdk_bdev_error.so 00:04:40.569 SO libspdk_bdev_malloc.so.6.0 00:04:40.569 SYMLINK libspdk_bdev_malloc.so 00:04:40.569 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:40.569 CC module/bdev/raid/raid0.o 00:04:40.569 LIB libspdk_bdev_split.a 00:04:40.569 CC module/bdev/aio/bdev_aio.o 00:04:40.569 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:40.826 SO libspdk_bdev_split.so.6.0 00:04:40.826 CC module/bdev/ftl/bdev_ftl.o 00:04:40.826 CC module/bdev/iscsi/bdev_iscsi.o 00:04:40.826 LIB libspdk_bdev_lvol.a 00:04:40.826 SYMLINK libspdk_bdev_split.so 00:04:40.826 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:40.826 SO libspdk_bdev_lvol.so.6.0 00:04:41.084 SYMLINK libspdk_bdev_lvol.so 00:04:41.084 CC module/bdev/nvme/nvme_rpc.o 00:04:41.084 CC module/bdev/aio/bdev_aio_rpc.o 00:04:41.084 CC module/bdev/nvme/bdev_mdns_client.o 00:04:41.084 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:41.084 CC module/bdev/raid/raid1.o 00:04:41.084 LIB libspdk_bdev_ftl.a 00:04:41.342 LIB libspdk_bdev_aio.a 00:04:41.342 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:41.342 SO libspdk_bdev_ftl.so.6.0 00:04:41.342 SO libspdk_bdev_aio.so.6.0 00:04:41.342 SYMLINK libspdk_bdev_ftl.so 00:04:41.342 CC module/bdev/raid/concat.o 00:04:41.342 SYMLINK libspdk_bdev_aio.so 00:04:41.342 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:41.342 CC module/bdev/nvme/vbdev_opal.o 00:04:41.342 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:41.342 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:41.342 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:41.342 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:41.342 LIB libspdk_bdev_zone_block.a 00:04:41.599 LIB libspdk_bdev_iscsi.a 00:04:41.599 SO libspdk_bdev_zone_block.so.6.0 00:04:41.599 SO libspdk_bdev_iscsi.so.6.0 00:04:41.599 LIB libspdk_bdev_raid.a 00:04:41.599 SYMLINK libspdk_bdev_zone_block.so 00:04:41.600 SYMLINK libspdk_bdev_iscsi.so 00:04:41.600 SO libspdk_bdev_raid.so.6.0 00:04:41.857 SYMLINK libspdk_bdev_raid.so 00:04:41.857 LIB libspdk_bdev_virtio.a 00:04:42.114 SO libspdk_bdev_virtio.so.6.0 00:04:42.114 SYMLINK libspdk_bdev_virtio.so 00:04:42.372 LIB libspdk_bdev_nvme.a 00:04:42.372 SO libspdk_bdev_nvme.so.7.0 00:04:42.630 SYMLINK libspdk_bdev_nvme.so 00:04:43.196 CC module/event/subsystems/scheduler/scheduler.o 00:04:43.196 CC module/event/subsystems/sock/sock.o 00:04:43.196 CC module/event/subsystems/keyring/keyring.o 00:04:43.196 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:43.196 CC module/event/subsystems/iobuf/iobuf.o 00:04:43.196 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:43.196 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:43.196 CC module/event/subsystems/vmd/vmd.o 00:04:43.196 LIB libspdk_event_vhost_blk.a 00:04:43.196 LIB libspdk_event_keyring.a 00:04:43.196 SO libspdk_event_vhost_blk.so.3.0 00:04:43.196 SO libspdk_event_keyring.so.1.0 00:04:43.196 LIB libspdk_event_scheduler.a 00:04:43.196 LIB libspdk_event_sock.a 00:04:43.196 SYMLINK libspdk_event_keyring.so 00:04:43.196 SYMLINK libspdk_event_vhost_blk.so 00:04:43.196 SO libspdk_event_scheduler.so.4.0 00:04:43.196 LIB libspdk_event_vmd.a 00:04:43.196 SO libspdk_event_sock.so.5.0 00:04:43.196 LIB libspdk_event_iobuf.a 00:04:43.455 SYMLINK libspdk_event_scheduler.so 00:04:43.455 SO libspdk_event_vmd.so.6.0 00:04:43.455 SO libspdk_event_iobuf.so.3.0 00:04:43.455 SYMLINK libspdk_event_sock.so 00:04:43.455 SYMLINK libspdk_event_vmd.so 00:04:43.455 SYMLINK libspdk_event_iobuf.so 00:04:43.713 CC 
module/event/subsystems/accel/accel.o 00:04:43.713 LIB libspdk_event_accel.a 00:04:43.971 SO libspdk_event_accel.so.6.0 00:04:43.971 SYMLINK libspdk_event_accel.so 00:04:44.229 CC module/event/subsystems/bdev/bdev.o 00:04:44.229 LIB libspdk_event_bdev.a 00:04:44.487 SO libspdk_event_bdev.so.6.0 00:04:44.487 SYMLINK libspdk_event_bdev.so 00:04:44.746 CC module/event/subsystems/nbd/nbd.o 00:04:44.746 CC module/event/subsystems/scsi/scsi.o 00:04:44.746 CC module/event/subsystems/ublk/ublk.o 00:04:44.746 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:44.746 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:44.746 LIB libspdk_event_nbd.a 00:04:44.746 SO libspdk_event_nbd.so.6.0 00:04:44.746 LIB libspdk_event_scsi.a 00:04:45.004 LIB libspdk_event_ublk.a 00:04:45.004 SO libspdk_event_scsi.so.6.0 00:04:45.004 SYMLINK libspdk_event_nbd.so 00:04:45.004 SO libspdk_event_ublk.so.3.0 00:04:45.004 SYMLINK libspdk_event_scsi.so 00:04:45.004 SYMLINK libspdk_event_ublk.so 00:04:45.004 LIB libspdk_event_nvmf.a 00:04:45.004 SO libspdk_event_nvmf.so.6.0 00:04:45.263 SYMLINK libspdk_event_nvmf.so 00:04:45.263 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:45.263 CC module/event/subsystems/iscsi/iscsi.o 00:04:45.263 LIB libspdk_event_vhost_scsi.a 00:04:45.522 LIB libspdk_event_iscsi.a 00:04:45.522 SO libspdk_event_vhost_scsi.so.3.0 00:04:45.522 SO libspdk_event_iscsi.so.6.0 00:04:45.522 SYMLINK libspdk_event_vhost_scsi.so 00:04:45.522 SYMLINK libspdk_event_iscsi.so 00:04:45.781 SO libspdk.so.6.0 00:04:45.781 SYMLINK libspdk.so 00:04:45.781 CXX app/trace/trace.o 00:04:45.781 CC app/spdk_nvme_identify/identify.o 00:04:45.781 CC app/trace_record/trace_record.o 00:04:46.039 CC app/spdk_lspci/spdk_lspci.o 00:04:46.039 CC app/spdk_nvme_perf/perf.o 00:04:46.039 CC app/nvmf_tgt/nvmf_main.o 00:04:46.039 CC app/iscsi_tgt/iscsi_tgt.o 00:04:46.039 CC app/spdk_tgt/spdk_tgt.o 00:04:46.039 CC examples/accel/perf/accel_perf.o 00:04:46.039 CC test/accel/dif/dif.o 00:04:46.039 LINK spdk_lspci 00:04:46.297 LINK nvmf_tgt 00:04:46.297 LINK spdk_trace_record 00:04:46.297 LINK spdk_tgt 00:04:46.297 LINK iscsi_tgt 00:04:46.297 LINK spdk_trace 00:04:46.555 LINK accel_perf 00:04:46.555 CC app/spdk_nvme_discover/discovery_aer.o 00:04:46.555 LINK dif 00:04:46.555 CC app/spdk_top/spdk_top.o 00:04:46.555 CC test/app/bdev_svc/bdev_svc.o 00:04:46.555 CC app/vhost/vhost.o 00:04:46.813 LINK spdk_nvme_discover 00:04:46.813 CC app/spdk_dd/spdk_dd.o 00:04:46.813 LINK spdk_nvme_identify 00:04:46.813 CC app/fio/nvme/fio_plugin.o 00:04:46.813 LINK spdk_nvme_perf 00:04:46.813 LINK bdev_svc 00:04:46.813 LINK vhost 00:04:47.071 CC examples/bdev/hello_world/hello_bdev.o 00:04:47.071 CC examples/bdev/bdevperf/bdevperf.o 00:04:47.071 CC app/fio/bdev/fio_plugin.o 00:04:47.071 LINK spdk_dd 00:04:47.330 LINK hello_bdev 00:04:47.330 CC examples/blob/hello_world/hello_blob.o 00:04:47.330 CC examples/blob/cli/blobcli.o 00:04:47.330 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:47.330 CC examples/ioat/perf/perf.o 00:04:47.330 LINK spdk_nvme 00:04:47.588 LINK spdk_top 00:04:47.588 CC examples/ioat/verify/verify.o 00:04:47.588 LINK hello_blob 00:04:47.588 LINK spdk_bdev 00:04:47.588 LINK ioat_perf 00:04:47.846 LINK verify 00:04:47.846 CC examples/nvme/hello_world/hello_world.o 00:04:47.846 CC examples/sock/hello_world/hello_sock.o 00:04:47.846 CC examples/nvme/reconnect/reconnect.o 00:04:47.846 LINK blobcli 00:04:47.846 LINK nvme_fuzz 00:04:47.846 LINK bdevperf 00:04:47.846 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:48.103 LINK hello_world 
00:04:48.103 CC examples/vmd/lsvmd/lsvmd.o 00:04:48.103 CC test/app/histogram_perf/histogram_perf.o 00:04:48.103 LINK hello_sock 00:04:48.103 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:48.103 LINK lsvmd 00:04:48.103 LINK reconnect 00:04:48.103 CC examples/nvmf/nvmf/nvmf.o 00:04:48.103 CC test/app/jsoncat/jsoncat.o 00:04:48.103 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:48.103 LINK histogram_perf 00:04:48.361 CC test/app/stub/stub.o 00:04:48.361 LINK jsoncat 00:04:48.361 LINK nvme_manage 00:04:48.361 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:48.361 CC examples/vmd/led/led.o 00:04:48.361 CC test/bdev/bdevio/bdevio.o 00:04:48.619 TEST_HEADER include/spdk/accel.h 00:04:48.619 TEST_HEADER include/spdk/accel_module.h 00:04:48.619 TEST_HEADER include/spdk/assert.h 00:04:48.619 TEST_HEADER include/spdk/barrier.h 00:04:48.619 TEST_HEADER include/spdk/base64.h 00:04:48.619 TEST_HEADER include/spdk/bdev.h 00:04:48.619 TEST_HEADER include/spdk/bdev_module.h 00:04:48.619 TEST_HEADER include/spdk/bdev_zone.h 00:04:48.619 LINK stub 00:04:48.619 TEST_HEADER include/spdk/bit_array.h 00:04:48.619 TEST_HEADER include/spdk/bit_pool.h 00:04:48.619 TEST_HEADER include/spdk/blob_bdev.h 00:04:48.619 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:48.619 LINK nvmf 00:04:48.619 TEST_HEADER include/spdk/blobfs.h 00:04:48.619 TEST_HEADER include/spdk/blob.h 00:04:48.619 TEST_HEADER include/spdk/conf.h 00:04:48.619 TEST_HEADER include/spdk/config.h 00:04:48.619 TEST_HEADER include/spdk/cpuset.h 00:04:48.619 TEST_HEADER include/spdk/crc16.h 00:04:48.619 TEST_HEADER include/spdk/crc32.h 00:04:48.619 TEST_HEADER include/spdk/crc64.h 00:04:48.619 TEST_HEADER include/spdk/dif.h 00:04:48.619 TEST_HEADER include/spdk/dma.h 00:04:48.619 TEST_HEADER include/spdk/endian.h 00:04:48.619 TEST_HEADER include/spdk/env_dpdk.h 00:04:48.620 TEST_HEADER include/spdk/env.h 00:04:48.620 TEST_HEADER include/spdk/event.h 00:04:48.620 TEST_HEADER include/spdk/fd_group.h 00:04:48.620 TEST_HEADER include/spdk/fd.h 00:04:48.620 TEST_HEADER include/spdk/file.h 00:04:48.620 TEST_HEADER include/spdk/ftl.h 00:04:48.620 TEST_HEADER include/spdk/gpt_spec.h 00:04:48.620 TEST_HEADER include/spdk/hexlify.h 00:04:48.620 TEST_HEADER include/spdk/histogram_data.h 00:04:48.620 TEST_HEADER include/spdk/idxd.h 00:04:48.620 TEST_HEADER include/spdk/idxd_spec.h 00:04:48.620 TEST_HEADER include/spdk/init.h 00:04:48.620 CC test/blobfs/mkfs/mkfs.o 00:04:48.620 TEST_HEADER include/spdk/ioat.h 00:04:48.620 TEST_HEADER include/spdk/ioat_spec.h 00:04:48.620 TEST_HEADER include/spdk/iscsi_spec.h 00:04:48.620 TEST_HEADER include/spdk/json.h 00:04:48.620 TEST_HEADER include/spdk/jsonrpc.h 00:04:48.620 TEST_HEADER include/spdk/keyring.h 00:04:48.620 TEST_HEADER include/spdk/keyring_module.h 00:04:48.620 TEST_HEADER include/spdk/likely.h 00:04:48.620 TEST_HEADER include/spdk/log.h 00:04:48.620 TEST_HEADER include/spdk/lvol.h 00:04:48.620 TEST_HEADER include/spdk/memory.h 00:04:48.620 TEST_HEADER include/spdk/mmio.h 00:04:48.620 TEST_HEADER include/spdk/nbd.h 00:04:48.620 TEST_HEADER include/spdk/notify.h 00:04:48.620 TEST_HEADER include/spdk/nvme.h 00:04:48.620 TEST_HEADER include/spdk/nvme_intel.h 00:04:48.620 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:48.620 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:48.620 TEST_HEADER include/spdk/nvme_spec.h 00:04:48.620 TEST_HEADER include/spdk/nvme_zns.h 00:04:48.620 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:48.620 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:48.620 TEST_HEADER include/spdk/nvmf.h 00:04:48.620 
TEST_HEADER include/spdk/nvmf_spec.h 00:04:48.620 TEST_HEADER include/spdk/nvmf_transport.h 00:04:48.620 LINK led 00:04:48.620 TEST_HEADER include/spdk/opal.h 00:04:48.620 TEST_HEADER include/spdk/opal_spec.h 00:04:48.620 TEST_HEADER include/spdk/pci_ids.h 00:04:48.620 TEST_HEADER include/spdk/pipe.h 00:04:48.620 CC examples/nvme/arbitration/arbitration.o 00:04:48.620 TEST_HEADER include/spdk/queue.h 00:04:48.620 TEST_HEADER include/spdk/reduce.h 00:04:48.620 TEST_HEADER include/spdk/rpc.h 00:04:48.620 TEST_HEADER include/spdk/scheduler.h 00:04:48.620 TEST_HEADER include/spdk/scsi.h 00:04:48.620 TEST_HEADER include/spdk/scsi_spec.h 00:04:48.620 TEST_HEADER include/spdk/sock.h 00:04:48.620 TEST_HEADER include/spdk/stdinc.h 00:04:48.620 TEST_HEADER include/spdk/string.h 00:04:48.620 TEST_HEADER include/spdk/thread.h 00:04:48.620 TEST_HEADER include/spdk/trace.h 00:04:48.620 TEST_HEADER include/spdk/trace_parser.h 00:04:48.620 TEST_HEADER include/spdk/tree.h 00:04:48.620 TEST_HEADER include/spdk/ublk.h 00:04:48.620 TEST_HEADER include/spdk/util.h 00:04:48.620 TEST_HEADER include/spdk/uuid.h 00:04:48.620 TEST_HEADER include/spdk/version.h 00:04:48.620 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:48.620 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:48.620 TEST_HEADER include/spdk/vhost.h 00:04:48.620 TEST_HEADER include/spdk/vmd.h 00:04:48.620 TEST_HEADER include/spdk/xor.h 00:04:48.620 TEST_HEADER include/spdk/zipf.h 00:04:48.620 CXX test/cpp_headers/accel.o 00:04:48.878 LINK mkfs 00:04:48.878 CC test/dma/test_dma/test_dma.o 00:04:48.878 LINK vhost_fuzz 00:04:48.878 LINK bdevio 00:04:48.878 CXX test/cpp_headers/accel_module.o 00:04:49.137 CC examples/util/zipf/zipf.o 00:04:49.137 LINK arbitration 00:04:49.137 CC test/env/vtophys/vtophys.o 00:04:49.137 CC examples/nvme/hotplug/hotplug.o 00:04:49.137 CC test/env/mem_callbacks/mem_callbacks.o 00:04:49.137 CXX test/cpp_headers/assert.o 00:04:49.137 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:49.137 CXX test/cpp_headers/barrier.o 00:04:49.137 LINK zipf 00:04:49.137 LINK test_dma 00:04:49.137 LINK vtophys 00:04:49.395 CXX test/cpp_headers/base64.o 00:04:49.395 LINK cmb_copy 00:04:49.395 LINK hotplug 00:04:49.395 CC examples/idxd/perf/perf.o 00:04:49.395 CC examples/thread/thread/thread_ex.o 00:04:49.653 CXX test/cpp_headers/bdev.o 00:04:49.653 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:49.653 CC test/event/event_perf/event_perf.o 00:04:49.653 CC examples/nvme/abort/abort.o 00:04:49.653 CXX test/cpp_headers/bdev_module.o 00:04:49.911 LINK mem_callbacks 00:04:49.911 LINK interrupt_tgt 00:04:49.911 LINK idxd_perf 00:04:49.911 CC test/lvol/esnap/esnap.o 00:04:49.911 LINK iscsi_fuzz 00:04:49.911 LINK event_perf 00:04:49.911 LINK thread 00:04:49.911 CXX test/cpp_headers/bdev_zone.o 00:04:50.169 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:50.169 LINK abort 00:04:50.169 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:50.169 CC test/event/reactor/reactor.o 00:04:50.169 CXX test/cpp_headers/bit_array.o 00:04:50.169 LINK env_dpdk_post_init 00:04:50.169 CC test/rpc_client/rpc_client_test.o 00:04:50.426 CC test/nvme/aer/aer.o 00:04:50.426 LINK reactor 00:04:50.426 LINK pmr_persistence 00:04:50.426 CC test/env/memory/memory_ut.o 00:04:50.426 CC test/env/pci/pci_ut.o 00:04:50.426 CXX test/cpp_headers/bit_pool.o 00:04:50.426 LINK rpc_client_test 00:04:50.426 CXX test/cpp_headers/blob_bdev.o 00:04:50.683 LINK aer 00:04:50.683 CC test/event/reactor_perf/reactor_perf.o 00:04:50.683 CC test/event/app_repeat/app_repeat.o 00:04:50.683 CXX 
test/cpp_headers/blobfs_bdev.o 00:04:50.942 LINK reactor_perf 00:04:50.942 CC test/event/scheduler/scheduler.o 00:04:50.942 LINK pci_ut 00:04:50.942 LINK app_repeat 00:04:50.942 CC test/nvme/reset/reset.o 00:04:50.942 CC test/thread/poller_perf/poller_perf.o 00:04:50.942 CXX test/cpp_headers/blobfs.o 00:04:51.200 CXX test/cpp_headers/blob.o 00:04:51.200 LINK scheduler 00:04:51.200 LINK poller_perf 00:04:51.200 CXX test/cpp_headers/conf.o 00:04:51.458 LINK reset 00:04:51.458 CC test/nvme/sgl/sgl.o 00:04:51.458 CC test/nvme/e2edp/nvme_dp.o 00:04:51.458 CC test/nvme/overhead/overhead.o 00:04:51.458 CXX test/cpp_headers/config.o 00:04:51.458 LINK memory_ut 00:04:51.458 CXX test/cpp_headers/cpuset.o 00:04:51.458 CXX test/cpp_headers/crc16.o 00:04:51.723 CC test/nvme/err_injection/err_injection.o 00:04:51.723 CXX test/cpp_headers/crc32.o 00:04:51.723 CC test/nvme/startup/startup.o 00:04:51.723 LINK nvme_dp 00:04:51.723 LINK sgl 00:04:51.723 LINK overhead 00:04:51.723 CC test/nvme/reserve/reserve.o 00:04:51.723 CC test/nvme/simple_copy/simple_copy.o 00:04:51.723 CXX test/cpp_headers/crc64.o 00:04:51.723 LINK startup 00:04:51.723 LINK err_injection 00:04:52.013 CC test/nvme/connect_stress/connect_stress.o 00:04:52.013 CC test/nvme/boot_partition/boot_partition.o 00:04:52.013 LINK reserve 00:04:52.013 CXX test/cpp_headers/dif.o 00:04:52.013 CXX test/cpp_headers/dma.o 00:04:52.013 LINK simple_copy 00:04:52.013 CC test/nvme/compliance/nvme_compliance.o 00:04:52.270 LINK connect_stress 00:04:52.270 CC test/nvme/fused_ordering/fused_ordering.o 00:04:52.270 LINK boot_partition 00:04:52.270 CXX test/cpp_headers/endian.o 00:04:52.270 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:52.270 CC test/nvme/fdp/fdp.o 00:04:52.528 CXX test/cpp_headers/env_dpdk.o 00:04:52.528 CXX test/cpp_headers/env.o 00:04:52.528 CC test/nvme/cuse/cuse.o 00:04:52.528 LINK fused_ordering 00:04:52.528 LINK nvme_compliance 00:04:52.528 CXX test/cpp_headers/event.o 00:04:52.528 LINK doorbell_aers 00:04:52.528 CXX test/cpp_headers/fd_group.o 00:04:52.787 CXX test/cpp_headers/fd.o 00:04:52.787 CXX test/cpp_headers/file.o 00:04:52.787 LINK fdp 00:04:52.787 CXX test/cpp_headers/ftl.o 00:04:52.787 CXX test/cpp_headers/gpt_spec.o 00:04:52.787 CXX test/cpp_headers/hexlify.o 00:04:53.047 CXX test/cpp_headers/histogram_data.o 00:04:53.047 CXX test/cpp_headers/idxd.o 00:04:53.047 CXX test/cpp_headers/idxd_spec.o 00:04:53.047 CXX test/cpp_headers/init.o 00:04:53.047 CXX test/cpp_headers/ioat.o 00:04:53.047 CXX test/cpp_headers/ioat_spec.o 00:04:53.047 CXX test/cpp_headers/iscsi_spec.o 00:04:53.047 CXX test/cpp_headers/json.o 00:04:53.047 CXX test/cpp_headers/jsonrpc.o 00:04:53.305 CXX test/cpp_headers/keyring.o 00:04:53.305 CXX test/cpp_headers/keyring_module.o 00:04:53.305 CXX test/cpp_headers/likely.o 00:04:53.305 CXX test/cpp_headers/log.o 00:04:53.305 CXX test/cpp_headers/lvol.o 00:04:53.305 CXX test/cpp_headers/memory.o 00:04:53.305 CXX test/cpp_headers/mmio.o 00:04:53.305 CXX test/cpp_headers/nbd.o 00:04:53.305 CXX test/cpp_headers/notify.o 00:04:53.563 CXX test/cpp_headers/nvme.o 00:04:53.563 CXX test/cpp_headers/nvme_intel.o 00:04:53.563 CXX test/cpp_headers/nvme_ocssd.o 00:04:53.563 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:53.563 CXX test/cpp_headers/nvme_spec.o 00:04:53.563 CXX test/cpp_headers/nvme_zns.o 00:04:53.563 CXX test/cpp_headers/nvmf_cmd.o 00:04:53.821 CXX test/cpp_headers/nvmf.o 00:04:53.821 CXX test/cpp_headers/nvmf_spec.o 00:04:53.821 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:53.821 CXX 
test/cpp_headers/nvmf_transport.o 00:04:53.821 CXX test/cpp_headers/opal.o 00:04:53.821 CXX test/cpp_headers/opal_spec.o 00:04:53.821 CXX test/cpp_headers/pci_ids.o 00:04:53.821 LINK cuse 00:04:54.079 CXX test/cpp_headers/pipe.o 00:04:54.079 CXX test/cpp_headers/queue.o 00:04:54.079 CXX test/cpp_headers/reduce.o 00:04:54.079 CXX test/cpp_headers/rpc.o 00:04:54.079 CXX test/cpp_headers/scheduler.o 00:04:54.337 CXX test/cpp_headers/scsi.o 00:04:54.337 CXX test/cpp_headers/scsi_spec.o 00:04:54.337 CXX test/cpp_headers/sock.o 00:04:54.337 CXX test/cpp_headers/stdinc.o 00:04:54.337 CXX test/cpp_headers/string.o 00:04:54.337 CXX test/cpp_headers/thread.o 00:04:54.595 CXX test/cpp_headers/trace.o 00:04:54.595 CXX test/cpp_headers/trace_parser.o 00:04:54.595 CXX test/cpp_headers/tree.o 00:04:54.595 CXX test/cpp_headers/ublk.o 00:04:54.595 CXX test/cpp_headers/util.o 00:04:54.595 CXX test/cpp_headers/uuid.o 00:04:54.595 CXX test/cpp_headers/version.o 00:04:54.595 CXX test/cpp_headers/vfio_user_pci.o 00:04:54.595 CXX test/cpp_headers/vfio_user_spec.o 00:04:54.595 CXX test/cpp_headers/vhost.o 00:04:54.853 CXX test/cpp_headers/vmd.o 00:04:54.853 CXX test/cpp_headers/xor.o 00:04:54.853 CXX test/cpp_headers/zipf.o 00:04:55.787 LINK esnap 00:04:57.687 00:04:57.687 real 1m13.361s 00:04:57.687 user 7m45.520s 00:04:57.687 sys 1m48.701s 00:04:57.687 16:16:31 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:04:57.687 16:16:31 -- common/autotest_common.sh@10 -- $ set +x 00:04:57.687 ************************************ 00:04:57.687 END TEST make 00:04:57.687 ************************************ 00:04:57.687 16:16:31 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:57.687 16:16:31 -- pm/common@30 -- $ signal_monitor_resources TERM 00:04:57.687 16:16:31 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:04:57.687 16:16:31 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:57.687 16:16:31 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:57.687 16:16:31 -- pm/common@45 -- $ pid=5319 00:04:57.687 16:16:31 -- pm/common@52 -- $ sudo kill -TERM 5319 00:04:57.687 16:16:31 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:57.687 16:16:31 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:57.687 16:16:31 -- pm/common@45 -- $ pid=5320 00:04:57.687 16:16:31 -- pm/common@52 -- $ sudo kill -TERM 5320 00:04:57.687 16:16:31 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:57.687 16:16:31 -- nvmf/common.sh@7 -- # uname -s 00:04:57.687 16:16:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:57.687 16:16:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:57.687 16:16:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:57.687 16:16:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:57.687 16:16:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:57.687 16:16:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:57.687 16:16:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:57.687 16:16:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:57.687 16:16:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:57.687 16:16:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:57.687 16:16:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d 00:04:57.687 16:16:31 -- nvmf/common.sh@18 -- # 
NVME_HOSTID=35bbb10f-fc38-42ac-b909-033700c5e05d 00:04:57.687 16:16:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:57.687 16:16:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:57.687 16:16:31 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:57.687 16:16:31 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:57.687 16:16:31 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:57.687 16:16:31 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:57.687 16:16:31 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:57.687 16:16:31 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:57.687 16:16:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.687 16:16:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.687 16:16:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.687 16:16:31 -- paths/export.sh@5 -- # export PATH 00:04:57.688 16:16:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.688 16:16:31 -- nvmf/common.sh@47 -- # : 0 00:04:57.688 16:16:31 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:57.688 16:16:31 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:57.688 16:16:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:57.688 16:16:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:57.688 16:16:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:57.688 16:16:31 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:57.688 16:16:31 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:57.688 16:16:31 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:57.688 16:16:31 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:57.688 16:16:31 -- spdk/autotest.sh@32 -- # uname -s 00:04:57.688 16:16:31 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:57.688 16:16:31 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:57.688 16:16:31 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:57.688 16:16:31 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:57.688 16:16:31 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:57.688 16:16:31 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:57.688 16:16:31 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:57.688 16:16:31 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 
00:04:57.688 16:16:31 -- spdk/autotest.sh@48 -- # udevadm_pid=54169 00:04:57.688 16:16:31 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:57.688 16:16:31 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:57.688 16:16:31 -- pm/common@17 -- # local monitor 00:04:57.688 16:16:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:57.688 16:16:31 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=54170 00:04:57.688 16:16:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:57.688 16:16:31 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=54171 00:04:57.688 16:16:31 -- pm/common@26 -- # sleep 1 00:04:57.688 16:16:31 -- pm/common@21 -- # date +%s 00:04:57.688 16:16:31 -- pm/common@21 -- # date +%s 00:04:57.688 16:16:31 -- pm/common@21 -- # sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1713370591 00:04:57.688 16:16:31 -- pm/common@21 -- # sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1713370591 00:04:57.688 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1713370591_collect-cpu-load.pm.log 00:04:57.688 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1713370591_collect-vmstat.pm.log 00:04:58.622 16:16:32 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:58.622 16:16:32 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:58.622 16:16:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:58.622 16:16:32 -- common/autotest_common.sh@10 -- # set +x 00:04:58.622 16:16:32 -- spdk/autotest.sh@59 -- # create_test_list 00:04:58.622 16:16:32 -- common/autotest_common.sh@734 -- # xtrace_disable 00:04:58.622 16:16:32 -- common/autotest_common.sh@10 -- # set +x 00:04:58.622 16:16:32 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:58.622 16:16:32 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:58.622 16:16:32 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:58.622 16:16:32 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:58.622 16:16:32 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:58.622 16:16:32 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:58.622 16:16:32 -- common/autotest_common.sh@1441 -- # uname 00:04:58.622 16:16:32 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:04:58.622 16:16:32 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:58.622 16:16:32 -- common/autotest_common.sh@1461 -- # uname 00:04:58.622 16:16:32 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:04:58.622 16:16:32 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:58.622 16:16:32 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:58.622 16:16:32 -- spdk/autotest.sh@72 -- # hash lcov 00:04:58.622 16:16:32 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:58.622 16:16:32 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:58.622 --rc lcov_branch_coverage=1 00:04:58.622 --rc lcov_function_coverage=1 00:04:58.622 --rc genhtml_branch_coverage=1 00:04:58.622 --rc genhtml_function_coverage=1 00:04:58.622 --rc genhtml_legend=1 00:04:58.622 --rc geninfo_all_blocks=1 00:04:58.622 ' 00:04:58.622 16:16:32 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:58.622 --rc 
lcov_branch_coverage=1 00:04:58.622 --rc lcov_function_coverage=1 00:04:58.622 --rc genhtml_branch_coverage=1 00:04:58.622 --rc genhtml_function_coverage=1 00:04:58.622 --rc genhtml_legend=1 00:04:58.622 --rc geninfo_all_blocks=1 00:04:58.622 ' 00:04:58.622 16:16:32 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:58.622 --rc lcov_branch_coverage=1 00:04:58.622 --rc lcov_function_coverage=1 00:04:58.622 --rc genhtml_branch_coverage=1 00:04:58.623 --rc genhtml_function_coverage=1 00:04:58.623 --rc genhtml_legend=1 00:04:58.623 --rc geninfo_all_blocks=1 00:04:58.623 --no-external' 00:04:58.623 16:16:32 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:58.623 --rc lcov_branch_coverage=1 00:04:58.623 --rc lcov_function_coverage=1 00:04:58.623 --rc genhtml_branch_coverage=1 00:04:58.623 --rc genhtml_function_coverage=1 00:04:58.623 --rc genhtml_legend=1 00:04:58.623 --rc geninfo_all_blocks=1 00:04:58.623 --no-external' 00:04:58.623 16:16:32 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:58.881 lcov: LCOV version 1.14 00:04:58.881 16:16:32 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:07.019 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:05:07.019 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:05:07.019 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:05:07.019 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:05:07.019 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:05:07.019 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:05:13.581 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:13.581 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:25.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:25.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:05:25.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:25.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:05:25.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:25.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:05:25.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:25.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:05:25.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:25.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:05:25.822 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:25.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:05:25.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:25.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:05:25.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:25.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:05:25.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:05:25.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:05:25.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:25.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:05:25.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:05:25.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:05:25.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:05:25.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:05:25.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:05:25.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:05:25.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:05:25.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:05:25.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:25.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:05:25.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:05:25.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:05:25.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:05:25.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:05:25.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:05:25.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:05:25.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:05:25.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:05:25.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:05:25.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:05:25.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:05:25.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:05:25.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:05:25.822 geninfo: WARNING: GCOV did not produce 
any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:05:25.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:05:25.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:05:25.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:05:25.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:05:25.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:05:25.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:05:25.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:05:25.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:05:25.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:05:25.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:05:25.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:05:25.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:05:25.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:05:25.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:05:25.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:05:25.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:05:25.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:05:25.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:05:25.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:05:25.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:05:25.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:05:25.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:05:25.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:05:25.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:05:25.822 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:05:25.822 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:05:25.823 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:05:25.823 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:05:25.823 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:25.823 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:05:25.823 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:05:25.823 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:05:25.823 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:05:25.823 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:05:25.823 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:05:25.823 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:05:25.823 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:05:25.823 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:05:25.823 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:05:25.823 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:05:25.823 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:05:25.823 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:05:25.823 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:05:25.823 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:05:25.823 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:05:25.823 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:05:25.823 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:05:25.823 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:05:25.823 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:05:25.823 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:05:25.823 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:05:25.823 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:05:25.823 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:05:25.823 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:05:25.823 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:05:25.823 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:05:25.823 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:05:25.823 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:05:25.823 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:05:25.823 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:05:25.823 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:05:25.823 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:05:25.823 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:05:25.823 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:05:25.823 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:05:25.823 geninfo: WARNING: GCOV 
did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:05:25.823 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:05:25.823 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:05:25.823 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:05:25.823 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:05:25.823 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:05:25.823 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:05:25.823 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:05:25.823 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:05:25.823 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:05:25.823 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:05:25.823 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:05:25.823 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:05:25.823 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:05:25.823 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:05:26.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:05:26.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:05:26.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:05:26.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:05:26.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:05:26.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:05:26.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:05:26.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:05:26.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:05:26.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:05:26.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:05:26.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:05:26.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:05:26.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:05:26.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:05:26.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:05:26.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:05:26.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:05:26.081 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:05:26.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:05:26.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:05:26.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:05:26.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:05:26.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:05:26.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:05:26.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:05:26.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:05:26.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:05:26.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:05:26.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:05:26.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:05:26.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:05:26.081 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:05:26.081 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:05:26.082 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:05:26.082 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:05:26.082 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:05:26.082 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:05:26.082 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:05:26.082 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:05:26.082 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:05:26.082 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:05:26.082 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:26.082 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:26.082 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:05:26.082 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:05:26.082 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:05:26.082 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:05:26.082 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:26.082 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:05:26.082 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:26.082 geninfo: WARNING: GCOV did not produce any data 
for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:05:30.283 16:17:03 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:05:30.283 16:17:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:30.283 16:17:03 -- common/autotest_common.sh@10 -- # set +x 00:05:30.283 16:17:03 -- spdk/autotest.sh@91 -- # rm -f 00:05:30.283 16:17:03 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:30.283 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:30.283 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:30.283 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:30.283 16:17:04 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:05:30.283 16:17:04 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:30.283 16:17:04 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:30.283 16:17:04 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:30.283 16:17:04 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:30.283 16:17:04 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:30.283 16:17:04 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:30.283 16:17:04 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:30.283 16:17:04 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:30.283 16:17:04 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:30.283 16:17:04 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:05:30.283 16:17:04 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:05:30.283 16:17:04 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:30.283 16:17:04 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:30.283 16:17:04 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:30.283 16:17:04 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:05:30.283 16:17:04 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:05:30.283 16:17:04 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:30.283 16:17:04 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:30.283 16:17:04 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:30.283 16:17:04 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:05:30.283 16:17:04 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:05:30.283 16:17:04 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:30.283 16:17:04 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:30.283 16:17:04 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:05:30.283 16:17:04 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:30.283 16:17:04 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:30.283 16:17:04 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:05:30.283 16:17:04 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:05:30.283 16:17:04 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:30.283 No valid GPT data, bailing 00:05:30.283 16:17:04 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:30.283 16:17:04 -- scripts/common.sh@391 -- # pt= 00:05:30.283 16:17:04 -- scripts/common.sh@392 -- # return 1 00:05:30.283 16:17:04 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M 
count=1 00:05:30.283 1+0 records in 00:05:30.283 1+0 records out 00:05:30.283 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00458958 s, 228 MB/s 00:05:30.283 16:17:04 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:30.283 16:17:04 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:30.283 16:17:04 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:05:30.283 16:17:04 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:05:30.283 16:17:04 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:30.541 No valid GPT data, bailing 00:05:30.541 16:17:04 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:30.541 16:17:04 -- scripts/common.sh@391 -- # pt= 00:05:30.541 16:17:04 -- scripts/common.sh@392 -- # return 1 00:05:30.541 16:17:04 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:30.541 1+0 records in 00:05:30.541 1+0 records out 00:05:30.541 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00497481 s, 211 MB/s 00:05:30.541 16:17:04 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:30.541 16:17:04 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:30.541 16:17:04 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:05:30.541 16:17:04 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:05:30.541 16:17:04 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:30.541 No valid GPT data, bailing 00:05:30.541 16:17:04 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:30.541 16:17:04 -- scripts/common.sh@391 -- # pt= 00:05:30.541 16:17:04 -- scripts/common.sh@392 -- # return 1 00:05:30.541 16:17:04 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:30.541 1+0 records in 00:05:30.541 1+0 records out 00:05:30.541 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00526585 s, 199 MB/s 00:05:30.541 16:17:04 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:30.541 16:17:04 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:30.541 16:17:04 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:05:30.541 16:17:04 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:05:30.541 16:17:04 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:30.541 No valid GPT data, bailing 00:05:30.541 16:17:04 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:30.541 16:17:04 -- scripts/common.sh@391 -- # pt= 00:05:30.541 16:17:04 -- scripts/common.sh@392 -- # return 1 00:05:30.541 16:17:04 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:30.541 1+0 records in 00:05:30.541 1+0 records out 00:05:30.541 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00497795 s, 211 MB/s 00:05:30.541 16:17:04 -- spdk/autotest.sh@118 -- # sync 00:05:30.541 16:17:04 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:30.541 16:17:04 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:30.541 16:17:04 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:32.443 16:17:06 -- spdk/autotest.sh@124 -- # uname -s 00:05:32.443 16:17:06 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:05:32.443 16:17:06 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:32.443 16:17:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:32.443 16:17:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 
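Condensed, the wipe loop traced above does the same thing for every whole NVMe namespace: probe for a partition table, and if none is recognized, zero the first MiB so stale metadata cannot leak into the tests. A simplified sketch of the branch the log actually takes (the spdk-gpt.py GPT probe is elided; extglob is already enabled in autotest):

shopt -s extglob
for dev in /dev/nvme*n!(*p*); do                 # whole namespaces only, skip partitions
    pt=$(blkid -s PTTYPE -o value "$dev")        # empty when no partition table is found
    if [[ -z $pt ]]; then
        dd if=/dev/zero of="$dev" bs=1M count=1  # clobber leftover GPT/filesystem headers
    fi
done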
00:05:32.443 16:17:06 -- common/autotest_common.sh@10 -- # set +x 00:05:32.701 ************************************ 00:05:32.702 START TEST setup.sh 00:05:32.702 ************************************ 00:05:32.702 16:17:06 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:32.702 * Looking for test storage... 00:05:32.702 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:32.702 16:17:06 -- setup/test-setup.sh@10 -- # uname -s 00:05:32.702 16:17:06 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:32.702 16:17:06 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:32.702 16:17:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:32.702 16:17:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:32.702 16:17:06 -- common/autotest_common.sh@10 -- # set +x 00:05:32.702 ************************************ 00:05:32.702 START TEST acl 00:05:32.702 ************************************ 00:05:32.702 16:17:06 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:32.961 * Looking for test storage... 00:05:32.961 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:32.961 16:17:06 -- setup/acl.sh@10 -- # get_zoned_devs 00:05:32.961 16:17:06 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:32.961 16:17:06 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:32.961 16:17:06 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:32.961 16:17:06 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:32.961 16:17:06 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:32.961 16:17:06 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:32.961 16:17:06 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:32.961 16:17:06 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:32.961 16:17:06 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:32.961 16:17:06 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:05:32.961 16:17:06 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:05:32.961 16:17:06 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:32.961 16:17:06 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:32.961 16:17:06 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:32.961 16:17:06 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:05:32.961 16:17:06 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:05:32.961 16:17:06 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:32.961 16:17:06 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:32.961 16:17:06 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:32.961 16:17:06 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:05:32.961 16:17:06 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:05:32.961 16:17:06 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:32.961 16:17:06 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:32.961 16:17:06 -- setup/acl.sh@12 -- # devs=() 00:05:32.961 16:17:06 -- setup/acl.sh@12 -- # declare -a devs 00:05:32.961 16:17:06 -- setup/acl.sh@13 -- # drivers=() 00:05:32.961 16:17:06 -- setup/acl.sh@13 -- # declare -A drivers 00:05:32.961 
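get_zoned_devs, which both autotest and acl.sh run, reduces to one sysfs check per namespace: a device whose queue/zoned attribute reads anything other than "none" is treated as zoned and set aside. Roughly, as a sketch of the shape rather than the verbatim autotest_common.sh helpers:

declare -A zoned_devs=()
for nvme in /sys/block/nvme*; do
    dev=${nvme##*/}
    # queue/zoned reads "none" for conventional devices, e.g. "host-managed" for ZNS
    if [[ -e $nvme/queue/zoned && $(< "$nvme/queue/zoned") != none ]]; then
        zoned_devs[$dev]=1   # the real helper records the PCI address; a flag suffices here
    fi
done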
16:17:06 -- setup/acl.sh@51 -- # setup reset 00:05:32.961 16:17:06 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:32.961 16:17:06 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:33.897 16:17:07 -- setup/acl.sh@52 -- # collect_setup_devs 00:05:33.897 16:17:07 -- setup/acl.sh@16 -- # local dev driver 00:05:33.897 16:17:07 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:33.897 16:17:07 -- setup/acl.sh@15 -- # setup output status 00:05:33.897 16:17:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:33.897 16:17:07 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:34.465 16:17:08 -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:05:34.465 16:17:08 -- setup/acl.sh@19 -- # continue 00:05:34.465 16:17:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:34.465 Hugepages 00:05:34.465 node hugesize free / total 00:05:34.465 16:17:08 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:34.465 16:17:08 -- setup/acl.sh@19 -- # continue 00:05:34.465 16:17:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:34.465 00:05:34.465 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:34.465 16:17:08 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:34.465 16:17:08 -- setup/acl.sh@19 -- # continue 00:05:34.465 16:17:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:34.465 16:17:08 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:34.465 16:17:08 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:34.465 16:17:08 -- setup/acl.sh@20 -- # continue 00:05:34.465 16:17:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:34.465 16:17:08 -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:05:34.465 16:17:08 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:34.465 16:17:08 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:34.465 16:17:08 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:34.465 16:17:08 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:34.465 16:17:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:34.724 16:17:08 -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:05:34.724 16:17:08 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:34.724 16:17:08 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:34.724 16:17:08 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:34.724 16:17:08 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:34.724 16:17:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:34.724 16:17:08 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:05:34.724 16:17:08 -- setup/acl.sh@54 -- # run_test denied denied 00:05:34.724 16:17:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:34.724 16:17:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:34.724 16:17:08 -- common/autotest_common.sh@10 -- # set +x 00:05:34.724 ************************************ 00:05:34.724 START TEST denied 00:05:34.724 ************************************ 00:05:34.724 16:17:08 -- common/autotest_common.sh@1111 -- # denied 00:05:34.724 16:17:08 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:05:34.724 16:17:08 -- setup/acl.sh@38 -- # setup output config 00:05:34.724 16:17:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:34.724 16:17:08 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:05:34.724 16:17:08 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:35.660 0000:00:10.0 (1b36 0010): Skipping denied 
controller at 0000:00:10.0 00:05:35.660 16:17:09 -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:05:35.660 16:17:09 -- setup/acl.sh@28 -- # local dev driver 00:05:35.660 16:17:09 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:35.660 16:17:09 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:05:35.660 16:17:09 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:05:35.660 16:17:09 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:35.660 16:17:09 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:35.660 16:17:09 -- setup/acl.sh@41 -- # setup reset 00:05:35.660 16:17:09 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:35.660 16:17:09 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:36.227 ************************************ 00:05:36.227 END TEST denied 00:05:36.227 ************************************ 00:05:36.227 00:05:36.227 real 0m1.434s 00:05:36.227 user 0m0.566s 00:05:36.227 sys 0m0.824s 00:05:36.227 16:17:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:36.227 16:17:10 -- common/autotest_common.sh@10 -- # set +x 00:05:36.227 16:17:10 -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:36.227 16:17:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:36.227 16:17:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:36.227 16:17:10 -- common/autotest_common.sh@10 -- # set +x 00:05:36.227 ************************************ 00:05:36.227 START TEST allowed 00:05:36.227 ************************************ 00:05:36.227 16:17:10 -- common/autotest_common.sh@1111 -- # allowed 00:05:36.227 16:17:10 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:05:36.227 16:17:10 -- setup/acl.sh@45 -- # setup output config 00:05:36.227 16:17:10 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:36.227 16:17:10 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:36.227 16:17:10 -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:05:37.163 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:37.163 16:17:11 -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:05:37.163 16:17:11 -- setup/acl.sh@28 -- # local dev driver 00:05:37.163 16:17:11 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:37.163 16:17:11 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:05:37.163 16:17:11 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:05:37.163 16:17:11 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:37.163 16:17:11 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:37.163 16:17:11 -- setup/acl.sh@48 -- # setup reset 00:05:37.163 16:17:11 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:37.163 16:17:11 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:38.099 00:05:38.099 real 0m1.580s 00:05:38.099 user 0m0.688s 00:05:38.099 sys 0m0.890s 00:05:38.099 16:17:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:38.099 16:17:11 -- common/autotest_common.sh@10 -- # set +x 00:05:38.099 ************************************ 00:05:38.099 END TEST allowed 00:05:38.099 ************************************ 00:05:38.099 00:05:38.099 real 0m5.105s 00:05:38.099 user 0m2.189s 00:05:38.099 sys 0m2.856s 00:05:38.100 16:17:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:38.100 16:17:11 -- common/autotest_common.sh@10 -- # set +x 00:05:38.100 ************************************ 00:05:38.100 END TEST acl 00:05:38.100 
************************************ 00:05:38.100 16:17:11 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:38.100 16:17:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:38.100 16:17:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:38.100 16:17:11 -- common/autotest_common.sh@10 -- # set +x 00:05:38.100 ************************************ 00:05:38.100 START TEST hugepages 00:05:38.100 ************************************ 00:05:38.100 16:17:11 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:38.100 * Looking for test storage... 00:05:38.100 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:38.100 16:17:12 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:38.100 16:17:12 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:38.100 16:17:12 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:38.100 16:17:12 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:38.100 16:17:12 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:38.100 16:17:12 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:38.100 16:17:12 -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:38.100 16:17:12 -- setup/common.sh@18 -- # local node= 00:05:38.100 16:17:12 -- setup/common.sh@19 -- # local var val 00:05:38.100 16:17:12 -- setup/common.sh@20 -- # local mem_f mem 00:05:38.100 16:17:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:38.100 16:17:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:38.100 16:17:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:38.100 16:17:12 -- setup/common.sh@28 -- # mapfile -t mem 00:05:38.100 16:17:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.100 16:17:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 5435856 kB' 'MemAvailable: 7361320 kB' 'Buffers: 2436 kB' 'Cached: 2135208 kB' 'SwapCached: 0 kB' 'Active: 876060 kB' 'Inactive: 1367808 kB' 'Active(anon): 116712 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1367808 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 688 kB' 'Writeback: 0 kB' 'AnonPages: 107892 kB' 'Mapped: 48796 kB' 'Shmem: 10488 kB' 'KReclaimable: 70492 kB' 'Slab: 147232 kB' 'SReclaimable: 70492 kB' 'SUnreclaim: 76740 kB' 'KernelStack: 6624 kB' 'PageTables: 4580 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 331488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # continue 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # read -r var val _ 
00:05:38.100 16:17:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # continue 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # continue 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # continue 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # continue 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # continue 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # continue 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # continue 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # continue 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # continue 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # continue 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # continue 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # continue 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # continue 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.100 16:17:12 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # continue 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # continue 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # continue 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # continue 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # continue 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # continue 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # continue 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # continue 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # continue 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # continue 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # continue 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # continue 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # continue 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # IFS=': ' 
00:05:38.100 16:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # continue 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # continue 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.100 16:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.100 16:17:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:38.101 16:17:12 -- setup/common.sh@32 -- # continue 00:05:38.101 16:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.101 16:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.101 16:17:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:38.101 16:17:12 -- setup/common.sh@32 -- # continue 00:05:38.101 16:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.101 16:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.101 16:17:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:38.101 16:17:12 -- setup/common.sh@32 -- # continue 00:05:38.101 16:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.101 16:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.101 16:17:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:38.101 16:17:12 -- setup/common.sh@32 -- # continue 00:05:38.101 16:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.101 16:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.101 16:17:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:38.101 16:17:12 -- setup/common.sh@32 -- # continue 00:05:38.101 16:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.101 16:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.101 16:17:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:38.101 16:17:12 -- setup/common.sh@32 -- # continue 00:05:38.101 16:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.101 16:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.101 16:17:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:38.101 16:17:12 -- setup/common.sh@32 -- # continue 00:05:38.101 16:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.101 16:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.101 16:17:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:38.101 16:17:12 -- setup/common.sh@32 -- # continue 00:05:38.101 16:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.101 16:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.101 16:17:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:38.101 16:17:12 -- setup/common.sh@32 -- # continue 00:05:38.101 16:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.101 16:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.101 16:17:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:38.101 16:17:12 -- setup/common.sh@32 -- # continue 00:05:38.101 16:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.101 16:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.101 16:17:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:38.101 16:17:12 -- setup/common.sh@32 -- # 
00:05:38.101 16:17:12 -- setup/common.sh@32 -- # [[ $var == \H\u\g\e\p\a\g\e\s\i\z\e ]] / continue -- [condensed: this test/continue/IFS/read cycle repeats for AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd and HugePages_Surp]
00:05:38.101 16:17:12 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:38.101 16:17:12 -- setup/common.sh@33 -- # echo 2048
00:05:38.101 16:17:12 -- setup/common.sh@33 -- # return 0
00:05:38.101 16:17:12 -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:05:38.101 16:17:12 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:05:38.101 16:17:12 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:05:38.101 16:17:12 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:05:38.101 16:17:12 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:05:38.101 16:17:12 -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:05:38.101 16:17:12 -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:05:38.101 16:17:12 -- setup/hugepages.sh@207 -- # get_nodes
00:05:38.101 16:17:12 -- setup/hugepages.sh@27 -- # local node
00:05:38.101 16:17:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:38.101 16:17:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:05:38.101 16:17:12 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:38.101 16:17:12 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:38.101 16:17:12 -- setup/hugepages.sh@208 -- # clear_hp
00:05:38.101 16:17:12 -- setup/hugepages.sh@37 -- # local node hp
00:05:38.101 16:17:12 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:05:38.101 16:17:12 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:38.101 16:17:12 -- setup/hugepages.sh@41 -- # echo 0
00:05:38.101 16:17:12 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:38.101 16:17:12 -- setup/hugepages.sh@41 -- # echo 0
00:05:38.101 16:17:12 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:05:38.101 16:17:12 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:05:38.101 16:17:12 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:05:38.101 16:17:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:38.101 16:17:12 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:38.101 16:17:12 -- common/autotest_common.sh@10 -- # set +x
00:05:38.101 ************************************
00:05:38.101 START TEST default_setup
00:05:38.101 ************************************
00:05:38.101 16:17:12 -- common/autotest_common.sh@1111 -- # default_setup
00:05:38.101 16:17:12 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:05:38.101 16:17:12 -- setup/hugepages.sh@49 -- # local size=2097152
00:05:38.101 16:17:12 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:38.101 16:17:12 -- setup/hugepages.sh@51 -- # shift
00:05:38.101 16:17:12 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:38.101 16:17:12 -- setup/hugepages.sh@52 -- # local node_ids
00:05:38.101 16:17:12 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:38.101 16:17:12 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:38.101 16:17:12 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:38.101 16:17:12 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:38.101 16:17:12 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:38.101 16:17:12 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:38.101 16:17:12 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:38.101 16:17:12 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:38.101 16:17:12 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:38.101 16:17:12 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:38.101 16:17:12 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:38.101 16:17:12 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:05:38.101 16:17:12 -- setup/hugepages.sh@73 -- # return 0
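The trace above is get_meminfo scanning /proc/meminfo until it reaches Hugepagesize, then get_test_nr_hugepages turning a 2097152 kB request into a page count. A minimal sketch of that pattern, assuming plain bash and a hypothetical helper name (this is not the SPDK helper itself):

  get_meminfo_sketch() {                    # scan /proc/meminfo for one key
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue  # the same test/continue pair the xtrace shows
          echo "$val" && return 0
      done </proc/meminfo
      return 1
  }
  # 2097152 kB requested / 2048 kB per page = 1024, the nr_hugepages=1024 above
  echo $(( 2097152 / $(get_meminfo_sketch Hugepagesize) ))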
00:05:38.101 16:17:12 -- setup/hugepages.sh@137 -- # setup output
00:05:38.101 16:17:12 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:38.101 16:17:12 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:39.038 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:39.038 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:05:39.038 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
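The "nvme -> uio_pci_generic" lines record setup.sh rebinding the two emulated NVMe controllers to a userspace-I/O driver. One common sysfs sequence for such a rebind, shown only as a sketch using one BDF from this run (not necessarily the exact commands setup.sh issues):

  bdf=0000:00:10.0
  echo uio_pci_generic | sudo tee /sys/bus/pci/devices/$bdf/driver_override   # pin the next probe
  echo "$bdf" | sudo tee /sys/bus/pci/devices/$bdf/driver/unbind              # detach from nvme
  echo "$bdf" | sudo tee /sys/bus/pci/drivers_probe                           # reprobe -> uio_pci_generic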
00:05:39.038 16:17:13 -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:05:39.038 16:17:13 -- setup/hugepages.sh@89 -- # local node
00:05:39.038 16:17:13 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:39.038 16:17:13 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:39.038 16:17:13 -- setup/hugepages.sh@92 -- # local surp
00:05:39.038 16:17:13 -- setup/hugepages.sh@93 -- # local resv
00:05:39.038 16:17:13 -- setup/hugepages.sh@94 -- # local anon
00:05:39.038 16:17:13 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:39.038 16:17:13 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:39.038 16:17:13 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:39.038 16:17:13 -- setup/common.sh@18 -- # local node=
00:05:39.038 16:17:13 -- setup/common.sh@19 -- # local var val
00:05:39.038 16:17:13 -- setup/common.sh@20 -- # local mem_f mem
00:05:39.038 16:17:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:39.038 16:17:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:39.038 16:17:13 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:39.038 16:17:13 -- setup/common.sh@28 -- # mapfile -t mem
00:05:39.038 16:17:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:39.038 16:17:13 -- setup/common.sh@31 -- # IFS=': '
00:05:39.038 16:17:13 -- setup/common.sh@31 -- # read -r var val _
00:05:39.038 16:17:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7549332 kB' 'MemAvailable: 9474660 kB' 'Buffers: 2436 kB' 'Cached: 2135200 kB' 'SwapCached: 0 kB' 'Active: 892456 kB' 'Inactive: 1367816 kB' 'Active(anon): 133108 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1367816 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 876 kB' 'Writeback: 0 kB' 'AnonPages: 124304 kB' 'Mapped: 48948 kB' 'Shmem: 10464 kB' 'KReclaimable: 70208 kB' 'Slab: 146716 kB' 'SReclaimable: 70208 kB' 'SUnreclaim: 76508 kB' 'KernelStack: 6536 kB' 'PageTables: 4300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348084 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB'
00:05:39.039 16:17:13 -- setup/common.sh@32 -- # [[ $var == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue -- [condensed: this test/continue/IFS/read cycle repeats for every snapshot key from MemTotal through HardwareCorrupted]
00:05:39.040 16:17:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:39.040 16:17:13 -- setup/common.sh@33 -- # echo 0
00:05:39.040 16:17:13 -- setup/common.sh@33 -- # return 0
00:05:39.040 16:17:13 -- setup/hugepages.sh@97 -- # anon=0
00:05:39.040 16:17:13 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:39.040 16:17:13 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:39.040 16:17:13 -- setup/common.sh@18 -- # local node=
00:05:39.040 16:17:13 -- setup/common.sh@19 -- # local var val
00:05:39.040 16:17:13 -- setup/common.sh@20 -- # local mem_f mem
00:05:39.040 16:17:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:39.040 16:17:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:39.040 16:17:13 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:39.040 16:17:13 -- setup/common.sh@28 -- # mapfile -t mem
00:05:39.040 16:17:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:39.040 16:17:13 -- setup/common.sh@31 -- # IFS=': '
00:05:39.040 16:17:13 -- setup/common.sh@31 -- # read -r var val _
00:05:39.040 16:17:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7550008 kB' 'MemAvailable: 9475336 kB' 'Buffers: 2436 kB' 'Cached: 2135200 kB' 'SwapCached: 0 kB' 'Active: 892160 kB' 'Inactive: 1367816 kB' 'Active(anon): 132812 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1367816 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 876 kB' 'Writeback: 0 kB' 'AnonPages: 124044 kB' 'Mapped: 48820 kB' 'Shmem: 10464 kB' 'KReclaimable: 70208 kB' 'Slab: 146712 kB' 'SReclaimable: 70208 kB' 'SUnreclaim: 76504 kB' 'KernelStack: 6568 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348084 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB'
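The printf above is the full /proc/meminfo snapshot that get_meminfo rescans once per requested key. For comparison only, a sketch that collects every counter verify_nr_hugepages wants in a single pass (assumes a standard awk; the shell variables simply take their names from the meminfo keys):

  eval "$(awk -F': +' '/^(AnonHugePages|HugePages_(Total|Free|Rsvd|Surp)):/ {
      sub(/ kB$/, "", $2); print $1 "=" $2
  }' /proc/meminfo)"
  echo "anon=$AnonHugePages total=$HugePages_Total surp=$HugePages_Surp resv=$HugePages_Rsvd"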
00:05:39.040 16:17:13 -- setup/common.sh@32 -- # [[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue -- [condensed: this test/continue/IFS/read cycle repeats for every snapshot key from MemTotal through HugePages_Rsvd]
00:05:39.337 16:17:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:39.337 16:17:13 -- setup/common.sh@33 -- # echo 0
00:05:39.337 16:17:13 -- setup/common.sh@33 -- # return 0
00:05:39.337 16:17:13 -- setup/hugepages.sh@99 -- # surp=0
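HugePages_Surp came back 0: no node is holding surplus pages beyond its configured count. The same counters are visible per NUMA node under sysfs; a quick look, assuming node0 and the 2048 kB page size from this run:

  base=/sys/devices/system/node/node0/hugepages/hugepages-2048kB
  cat "$base/nr_hugepages"       # pages configured on this node (1024 here)
  cat "$base/free_hugepages"     # pages not handed out yet
  cat "$base/surplus_hugepages"  # overcommit pages beyond nr_hugepages (0 here)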
00:05:39.337 16:17:13 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:39.337 16:17:13 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:39.337 16:17:13 -- setup/common.sh@18 -- # local node=
00:05:39.337 16:17:13 -- setup/common.sh@19 -- # local var val
00:05:39.337 16:17:13 -- setup/common.sh@20 -- # local mem_f mem
00:05:39.337 16:17:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:39.337 16:17:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:39.337 16:17:13 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:39.337 16:17:13 -- setup/common.sh@28 -- # mapfile -t mem
00:05:39.337 16:17:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:39.337 16:17:13 -- setup/common.sh@31 -- # IFS=': '
00:05:39.337 16:17:13 -- setup/common.sh@31 -- # read -r var val _
00:05:39.337 16:17:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7550008 kB' 'MemAvailable: 9475336 kB' 'Buffers: 2436 kB' 'Cached: 2135200 kB' 'SwapCached: 0 kB' 'Active: 892200 kB' 'Inactive: 1367816 kB' 'Active(anon): 132852 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1367816 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 876 kB' 'Writeback: 0 kB' 'AnonPages: 124060 kB' 'Mapped: 48820 kB' 'Shmem: 10464 kB' 'KReclaimable: 70208 kB' 'Slab: 146712 kB' 'SReclaimable: 70208 kB' 'SUnreclaim: 76504 kB' 'KernelStack: 6568 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348084 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB'
00:05:39.337 16:17:13 -- setup/common.sh@32 -- # [[ $var == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue -- [condensed: this test/continue/IFS/read cycle repeats for every snapshot key from MemTotal through HugePages_Free]
00:05:39.338 16:17:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:39.338 16:17:13 -- setup/common.sh@33 -- # echo 0
00:05:39.338 16:17:13 -- setup/common.sh@33 -- # return 0
00:05:39.338 16:17:13 -- setup/hugepages.sh@100 -- # resv=0
00:05:39.339 nr_hugepages=1024
00:05:39.339 16:17:13 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:39.339 resv_hugepages=0
00:05:39.339 16:17:13 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:39.339 surplus_hugepages=0
00:05:39.339 16:17:13 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:39.339 anon_hugepages=0
00:05:39.339 16:17:13 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:39.339 16:17:13 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:39.339 16:17:13 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
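The two (( ... )) guards above are the pool checks that just evaluated true; restated with this run's values:

  nr_hugepages=1024 surp=0 resv=0
  (( 1024 == nr_hugepages + surp + resv )) && echo 'pool accounting is consistent'
  (( 1024 == nr_hugepages ))               && echo 'requested page count was reached'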
00:05:39.339 16:17:13 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:39.339 16:17:13 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:39.339 16:17:13 -- setup/common.sh@18 -- # local node=
00:05:39.339 16:17:13 -- setup/common.sh@19 -- # local var val
00:05:39.339 16:17:13 -- setup/common.sh@20 -- # local mem_f mem
00:05:39.339 16:17:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:39.339 16:17:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:39.339 16:17:13 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:39.339 16:17:13 -- setup/common.sh@28 -- # mapfile -t mem
00:05:39.339 16:17:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:39.339 16:17:13 -- setup/common.sh@31 -- # IFS=': '
00:05:39.339 16:17:13 -- setup/common.sh@31 -- # read -r var val _
00:05:39.339 16:17:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7550008 kB' 'MemAvailable: 9475336 kB' 'Buffers: 2436 kB' 'Cached: 2135200 kB' 'SwapCached: 0 kB' 'Active: 892252 kB' 'Inactive: 1367816 kB' 'Active(anon): 132904 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1367816 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 876 kB' 'Writeback: 0 kB' 'AnonPages: 124104 kB' 'Mapped: 48820 kB' 'Shmem: 10464 kB' 'KReclaimable: 70208 kB' 'Slab: 146712 kB' 'SReclaimable: 70208 kB' 'SUnreclaim: 76504 kB' 'KernelStack: 6584 kB' 'PageTables: 4432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348084 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB'
00:05:39.339 16:17:13 -- setup/common.sh@32 -- # [[ $var == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue -- [condensed: this test/continue/IFS/read cycle repeats for every snapshot key from MemTotal through CmaTotal]
00:05:39.340 16:17:13 -- setup/common.sh@32 -- # [[ CmaFree ==
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.340 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.340 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.340 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.340 16:17:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.340 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.340 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.340 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.340 16:17:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.340 16:17:13 -- setup/common.sh@33 -- # echo 1024 00:05:39.340 16:17:13 -- setup/common.sh@33 -- # return 0 00:05:39.340 16:17:13 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:39.340 16:17:13 -- setup/hugepages.sh@112 -- # get_nodes 00:05:39.340 16:17:13 -- setup/hugepages.sh@27 -- # local node 00:05:39.340 16:17:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:39.340 16:17:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:39.340 16:17:13 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:39.340 16:17:13 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:39.340 16:17:13 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:39.340 16:17:13 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:39.340 16:17:13 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:39.340 16:17:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:39.340 16:17:13 -- setup/common.sh@18 -- # local node=0 00:05:39.340 16:17:13 -- setup/common.sh@19 -- # local var val 00:05:39.340 16:17:13 -- setup/common.sh@20 -- # local mem_f mem 00:05:39.340 16:17:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:39.340 16:17:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:39.340 16:17:13 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:39.340 16:17:13 -- setup/common.sh@28 -- # mapfile -t mem 00:05:39.340 16:17:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:39.340 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.340 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.340 16:17:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7550008 kB' 'MemUsed: 4691968 kB' 'SwapCached: 0 kB' 'Active: 892216 kB' 'Inactive: 1367816 kB' 'Active(anon): 132868 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1367816 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 876 kB' 'Writeback: 0 kB' 'FilePages: 2137636 kB' 'Mapped: 48820 kB' 'AnonPages: 124068 kB' 'Shmem: 10464 kB' 'KernelStack: 6568 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70208 kB' 'Slab: 146712 kB' 'SReclaimable: 70208 kB' 'SUnreclaim: 76504 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:39.340 16:17:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.340 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.340 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.340 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.340 16:17:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:39.340 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.340 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.340 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.340 16:17:13 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.340 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.340 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.340 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.340 16:17:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.340 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.340 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.340 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.340 16:17:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.340 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.340 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.340 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.340 16:17:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.340 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.340 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.340 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.340 16:17:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.340 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.340 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.340 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.341 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.341 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.341 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.341 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.341 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.341 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.341 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.341 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.341 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.341 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.341 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.341 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.341 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.341 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 
00:05:39.341 16:17:13 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.341 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.341 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.341 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.341 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.341 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.341 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.341 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.341 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.341 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.341 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.341 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.341 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.341 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.341 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.341 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.341 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.341 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.341 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.341 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.341 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.341 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.341 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.341 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.341 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.341 16:17:13 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:39.341 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.341 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.341 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.341 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.341 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.341 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.341 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.341 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.341 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.341 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.341 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.341 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.341 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.341 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.341 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.341 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.341 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.341 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.342 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.342 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.342 16:17:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.342 16:17:13 -- setup/common.sh@33 -- # echo 0 00:05:39.342 16:17:13 -- setup/common.sh@33 -- # return 0 00:05:39.342 16:17:13 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:39.342 16:17:13 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:39.342 16:17:13 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:39.342 16:17:13 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:39.342 node0=1024 expecting 1024 00:05:39.342 16:17:13 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:39.342 16:17:13 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:39.342 00:05:39.342 real 0m1.038s 00:05:39.342 user 0m0.465s 00:05:39.342 sys 0m0.511s 00:05:39.342 16:17:13 -- common/autotest_common.sh@1112 -- # 
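default_setup passes because get_meminfo resolves HugePages_Total to 1024, matching nr_hugepages + surp + resv at setup/hugepages.sh@110, and node0's share also comes back as 1024. The traced helper reads /proc/meminfo (or the per-node copy under sysfs, whose lines carry a "Node N " prefix) and splits each line with IFS=': ' until the requested field matches. A minimal standalone sketch of that lookup pattern, simplified from the traced logic (the real setup/common.sh uses mapfile plus extglob prefix stripping; sed is swapped in here for brevity):

get_meminfo() {
    local get=$1 node=$2 var val rest
    local mem_f=/proc/meminfo
    # Per-node counters live in sysfs; each line starts with "Node <N> ".
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # IFS=': ' splits "HugePages_Total:    1024" into var=HugePages_Total, val=1024.
    while IFS=': ' read -r var val rest; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

In this run, get_meminfo HugePages_Total would print 1024 and get_meminfo HugePages_Surp 0 would print 0 for node 0, which is exactly what the [[ 1024 == \1\0\2\4 ]] comparison above consumes.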
00:05:39.342 16:17:13 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:05:39.342 16:17:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:39.342 16:17:13 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:39.342 16:17:13 -- common/autotest_common.sh@10 -- # set +x
00:05:39.342 ************************************
00:05:39.342 START TEST per_node_1G_alloc
00:05:39.342 ************************************
00:05:39.342 16:17:13 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc
00:05:39.342 16:17:13 -- setup/hugepages.sh@143 -- # local IFS=,
00:05:39.342 16:17:13 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:05:39.342 16:17:13 -- setup/hugepages.sh@49 -- # local size=1048576
00:05:39.342 16:17:13 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:39.342 16:17:13 -- setup/hugepages.sh@51 -- # shift
00:05:39.342 16:17:13 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:39.342 16:17:13 -- setup/hugepages.sh@52 -- # local node_ids
00:05:39.342 16:17:13 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:39.342 16:17:13 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:39.342 16:17:13 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:39.342 16:17:13 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:39.342 16:17:13 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:39.342 16:17:13 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:39.342 16:17:13 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:39.342 16:17:13 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:39.342 16:17:13 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:39.342 16:17:13 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:39.342 16:17:13 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:39.342 16:17:13 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:05:39.342 16:17:13 -- setup/hugepages.sh@73 -- # return 0
00:05:39.342 16:17:13 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:05:39.342 16:17:13 -- setup/hugepages.sh@146 -- # HUGENODE=0
00:05:39.342 16:17:13 -- setup/hugepages.sh@146 -- # setup output
00:05:39.342 16:17:13 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:39.342 16:17:13 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:39.628 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:39.628 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:39.628 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
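get_test_nr_hugepages converts the requested 1048576 kB into nr_hugepages=512 because the system's Hugepagesize is 2048 kB (1048576 / 2048 = 512), and HUGENODE=0 pins the whole reservation to NUMA node 0. setup.sh's internals are not shown in this trace; what follows is a minimal sketch of the per-node reservation it is being asked to perform, assuming the standard kernel sysfs interface and root privileges:

# Reserve NRHUGE 2 MiB hugepages on NUMA node $HUGENODE only.
NRHUGE=512
HUGENODE=0
nr=/sys/devices/system/node/node$HUGENODE/hugepages/hugepages-2048kB/nr_hugepages
echo "$NRHUGE" > "$nr"   # requires root; the kernel may grant fewer pages if memory is fragmented
cat "$nr"                # confirm what was actually reserved on that node
grep -E 'HugePages_(Total|Free)' /proc/meminfo

The verify_nr_hugepages pass that follows re-reads these counters to check the reservation took effect.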
00:05:39.628 16:17:13 -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:05:39.628 16:17:13 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:05:39.628 16:17:13 -- setup/hugepages.sh@89 -- # local node
00:05:39.628 16:17:13 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:39.628 16:17:13 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:39.628 16:17:13 -- setup/hugepages.sh@92 -- # local surp
00:05:39.628 16:17:13 -- setup/hugepages.sh@93 -- # local resv
00:05:39.628 16:17:13 -- setup/hugepages.sh@94 -- # local anon
00:05:39.628 16:17:13 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:39.628 16:17:13 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:39.628 16:17:13 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:39.628 16:17:13 -- setup/common.sh@18 -- # local node=
00:05:39.628 16:17:13 -- setup/common.sh@19 -- # local var val
00:05:39.628 16:17:13 -- setup/common.sh@20 -- # local mem_f mem
00:05:39.628 16:17:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:39.628 16:17:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:39.628 16:17:13 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:39.628 16:17:13 -- setup/common.sh@28 -- # mapfile -t mem
00:05:39.628 16:17:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:39.628 16:17:13 -- setup/common.sh@31 -- # IFS=': '
00:05:39.628 16:17:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8594364 kB' 'MemAvailable: 10519752 kB' 'Buffers: 2436 kB' 'Cached: 2135236 kB' 'SwapCached: 0 kB' 'Active: 892508 kB' 'Inactive: 1367860 kB' 'Active(anon): 133160 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1367860 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1032 kB' 'Writeback: 0 kB' 'AnonPages: 124528 kB' 'Mapped: 48960 kB' 'Shmem: 10464 kB' 'KReclaimable: 70240 kB' 'Slab: 146896 kB' 'SReclaimable: 70240 kB' 'SUnreclaim: 76656 kB' 'KernelStack: 6532 kB' 'PageTables: 4444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 348084 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB'
00:05:39.628 16:17:13 -- setup/common.sh@31 -- # read -r var val _
00:05:39.628 16:17:13 [... per-field scan elided: setup/common.sh@32 compares each meminfo field (MemTotal through HardwareCorrupted) against AnonHugePages and skips it via 'continue' ...]
00:05:39.891 16:17:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:39.891 16:17:13 -- setup/common.sh@33 -- # echo 0
00:05:39.891 16:17:13 -- setup/common.sh@33 -- # return 0
00:05:39.891 16:17:13 -- setup/hugepages.sh@97 -- # anon=0
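The AnonHugePages probe above only runs because the gate at setup/hugepages.sh@96 saw "always [madvise] never": the kernel brackets the active transparent-hugepage mode, and the test only skips anonymous-hugepage accounting when that mode is [never]. A sketch of that gate, reusing the hypothetical get_meminfo sketch from earlier:

# The kernel reports THP state as, e.g., "always [madvise] never".
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
if [[ $thp != *"[never]"* ]]; then
    # THP may back anonymous memory with hugepages, so record current usage.
    anon=$(get_meminfo AnonHugePages)   # 0 kB in this run
fi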
00:05:39.891 16:17:13 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:39.891 16:17:13 [... get_meminfo locals (get=HugePages_Surp), node path check, and mapfile of /proc/meminfo elided; identical shape to the AnonHugePages lookup above ...]
00:05:39.891 16:17:13 -- setup/common.sh@31 -- # IFS=': '
00:05:39.891 16:17:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8594364 kB' 'MemAvailable: 10519752 kB' 'Buffers: 2436 kB' 'Cached: 2135236 kB' 'SwapCached: 0 kB' 'Active: 892272 kB' 'Inactive: 1367860 kB' 'Active(anon): 132924 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1367860 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1032 kB' 'Writeback: 0 kB' 'AnonPages: 124296 kB' 'Mapped: 48832 kB' 'Shmem: 10464 kB' 'KReclaimable: 70240 kB' 'Slab: 146896 kB' 'SReclaimable: 70240 kB' 'SUnreclaim: 76656 kB' 'KernelStack: 6560 kB' 'PageTables: 4440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 348084 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB'
00:05:39.891 16:17:13 -- setup/common.sh@31 -- # read -r var val _
00:05:39.891 16:17:13 [... per-field scan elided: each meminfo field (MemTotal through HugePages_Free) is compared against HugePages_Surp and skipped via 'continue' ...]
00:05:39.893 16:17:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:39.893 16:17:13 -- setup/common.sh@33 -- # echo 0
00:05:39.893 16:17:13 -- setup/common.sh@33 -- # return 0
00:05:39.893 16:17:13 -- setup/hugepages.sh@99 -- # surp=0
00:05:39.893 16:17:13 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:39.893 16:17:13 [... get_meminfo locals (get=HugePages_Rsvd), node path check, and mapfile of /proc/meminfo elided; identical shape to the lookups above ...]
00:05:39.893 16:17:13 -- setup/common.sh@31 -- # IFS=': '
00:05:39.893 16:17:13 -- setup/common.sh@31 -- # read -r var val _
00:05:39.893 16:17:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8594364 kB' 'MemAvailable: 10519752 kB' 'Buffers: 2436 kB' 'Cached: 2135236 kB' 'SwapCached: 0 kB' 'Active: 892316 kB' 'Inactive: 1367860 kB' 'Active(anon): 132968 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1367860 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1032 kB' 'Writeback: 0 kB' 'AnonPages: 124336 kB' 'Mapped: 48832 kB' 'Shmem: 10464 kB' 'KReclaimable: 70240 kB' 'Slab: 146896 kB' 'SReclaimable: 70240 kB' 'SUnreclaim: 76656 kB' 'KernelStack: 6560 kB' 'PageTables: 4440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 348084 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB'
00:05:39.893 16:17:13 [... per-field scan against HugePages_Rsvd in progress (MemTotal through KReclaimable skipped via 'continue' so far); trace truncated here ...]
IFS=': ' 00:05:39.893 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.893 16:17:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.893 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.893 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.893 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.893 16:17:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.893 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.893 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.893 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.893 16:17:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.893 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.893 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.893 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.893 16:17:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.893 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.893 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.893 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.893 16:17:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.893 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.893 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.893 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.893 16:17:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.893 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.893 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.893 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.893 16:17:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.893 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.893 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:39.894 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.894 16:17:13 -- setup/common.sh@31 
-- # read -r var val _ 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.894 16:17:13 -- setup/common.sh@33 -- # echo 0 00:05:39.894 16:17:13 -- setup/common.sh@33 -- # return 0 00:05:39.894 16:17:13 -- setup/hugepages.sh@100 -- # resv=0 00:05:39.894 nr_hugepages=512 00:05:39.894 16:17:13 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:39.894 resv_hugepages=0 00:05:39.894 16:17:13 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:39.894 surplus_hugepages=0 00:05:39.894 16:17:13 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:39.894 anon_hugepages=0 00:05:39.894 16:17:13 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:39.894 16:17:13 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:39.894 16:17:13 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:39.894 16:17:13 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:39.894 16:17:13 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:39.894 16:17:13 -- setup/common.sh@18 -- # local node= 00:05:39.894 16:17:13 -- setup/common.sh@19 -- # local var val 00:05:39.894 16:17:13 -- setup/common.sh@20 -- # local mem_f mem 00:05:39.894 16:17:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:39.894 16:17:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:39.894 16:17:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:39.894 16:17:13 -- setup/common.sh@28 -- # mapfile -t mem 00:05:39.894 16:17:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.894 16:17:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8594732 kB' 'MemAvailable: 10520120 kB' 'Buffers: 2436 kB' 'Cached: 2135236 kB' 'SwapCached: 0 kB' 'Active: 892208 kB' 'Inactive: 1367860 kB' 'Active(anon): 132860 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1367860 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1032 kB' 'Writeback: 0 kB' 'AnonPages: 124276 kB' 'Mapped: 48832 kB' 'Shmem: 10464 kB' 'KReclaimable: 70240 kB' 'Slab: 146896 kB' 'SReclaimable: 70240 kB' 'SUnreclaim: 76656 kB' 'KernelStack: 6576 kB' 'PageTables: 4496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 348084 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 
00:05:39.894 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.894 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.894 16:17:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.895 16:17:13 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:05:39.895 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # continue 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.895 16:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.895 16:17:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.895 16:17:13 -- setup/common.sh@33 -- # echo 512 00:05:39.895 16:17:13 -- setup/common.sh@33 -- # return 0 00:05:39.895 16:17:13 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:39.895 16:17:13 -- setup/hugepages.sh@112 -- # get_nodes 00:05:39.895 16:17:13 -- setup/hugepages.sh@27 -- # local node 00:05:39.895 16:17:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:39.895 16:17:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:39.895 16:17:13 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:39.895 16:17:13 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:39.895 16:17:13 -- setup/hugepages.sh@115 
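The get_meminfo calls above are what generate these long test/continue runs: setup/common.sh slurps /proc/meminfo (or a per-node meminfo under sysfs) into an array and walks it field by field until the requested key matches, and under set -x every skipped field costs several trace lines. For reference, the same lookup fits in a few lines of bash. A minimal sketch under stated assumptions; get_meminfo_value is a hypothetical name, not the actual setup/common.sh helper:

#!/usr/bin/env bash
# Hypothetical stand-in for the traced lookup: print the value of one
# /proc/meminfo field, or a per-node field when a node index is given.
get_meminfo_value() {
	local field=$1 node=${2:-} mem_f=/proc/meminfo line
	[[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
	while IFS= read -r line; do
		# Per-node meminfo prefixes every line with "Node <n>"; strip it.
		[[ -n $node ]] && line=${line#Node "$node" }
		if [[ $line == "$field:"* ]]; then
			line=${line#"$field:"}     # drop the key ...
			echo "${line//[!0-9]/}"    # ... and keep only the digits
			return 0
		fi
	done <"$mem_f"
	return 1 # field not present
}

# Against the snapshot above this should print 0:
get_meminfo_value HugePages_Rsvd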
00:05:39.895 16:17:13 -- setup/hugepages.sh@112 -- # get_nodes
00:05:39.895 16:17:13 -- setup/hugepages.sh@27 -- # local node
00:05:39.895 16:17:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:39.895 16:17:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:39.895 16:17:13 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:39.895 16:17:13 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:39.895 16:17:13 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:39.896 16:17:13 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:39.896 16:17:13 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:39.896 16:17:13 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:39.896 16:17:13 -- setup/common.sh@18 -- # local node=0
00:05:39.896 16:17:13 -- setup/common.sh@19 -- # local var val
00:05:39.896 16:17:13 -- setup/common.sh@20 -- # local mem_f mem
00:05:39.896 16:17:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:39.896 16:17:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:39.896 16:17:13 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:39.896 16:17:13 -- setup/common.sh@28 -- # mapfile -t mem
00:05:39.896 16:17:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:39.896 16:17:13 -- setup/common.sh@31 -- # IFS=': '
00:05:39.896 16:17:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8594732 kB' 'MemUsed: 3647244 kB' 'SwapCached: 0 kB' 'Active: 892236 kB' 'Inactive: 1367860 kB' 'Active(anon): 132888 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1367860 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1032 kB' 'Writeback: 0 kB' 'FilePages: 2137672 kB' 'Mapped: 48832 kB' 'AnonPages: 124252 kB' 'Shmem: 10464 kB' 'KernelStack: 6560 kB' 'PageTables: 4440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70240 kB' 'Slab: 146892 kB' 'SReclaimable: 70240 kB' 'SUnreclaim: 76652 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... per-field scan over the node0 snapshot elided; HugePages_Surp matches ...]
00:05:39.897 16:17:13 -- setup/common.sh@33 -- # echo 0
00:05:39.897 16:17:13 -- setup/common.sh@33 -- # return 0
00:05:39.897 16:17:13 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:39.897 16:17:13 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:39.897 16:17:13 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:39.897 16:17:13 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:39.897 node0=512 expecting 512
00:05:39.897 16:17:13 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:39.897 16:17:13 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:05:39.897 
00:05:39.897 real	0m0.507s
00:05:39.897 user	0m0.251s
00:05:39.897 sys	0m0.289s
00:05:39.897 16:17:13 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:39.897 16:17:13 -- common/autotest_common.sh@10 -- # set +x
00:05:39.897 ************************************
00:05:39.897 END TEST per_node_1G_alloc
00:05:39.897 ************************************
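The node0=512 line closes the loop: hugepages.sh@107 already checked the machine-wide identity (HugePages_Total against nr_hugepages + surplus + reserved), and the per-node pass above then compared each node's share from /sys/devices/system/node/nodeN/meminfo. A rough sketch of that per-node comparison, reusing the hypothetical get_meminfo_value helper from the earlier sketch; this mirrors the intent of verify_nr_hugepages, not its exact code:

# Compare each NUMA node's HugePages_Total against the count the test
# requested for it; expected[N] holds the target for node N.
verify_nodes() {
	local expected=(512) dir node actual rc=0
	for dir in /sys/devices/system/node/node[0-9]*; do
		[[ -e $dir/meminfo ]] || continue
		node=${dir##*node}
		actual=$(get_meminfo_value HugePages_Total "$node")
		if ((actual == expected[node])); then
			echo "node$node=$actual expecting ${expected[node]}"
		else
			echo "node$node=$actual but expected ${expected[node]}" >&2
			rc=1
		fi
	done
	return $rc
}

On the single-node VM used here, expected=(512) reproduces the node0=512 expecting 512 line on success.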
00:05:39.897 16:17:13 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:05:39.897 16:17:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:39.897 16:17:13 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:39.897 16:17:13 -- common/autotest_common.sh@10 -- # set +x
00:05:39.897 ************************************
00:05:39.897 START TEST even_2G_alloc
00:05:39.897 ************************************
00:05:39.897 16:17:13 -- common/autotest_common.sh@1111 -- # even_2G_alloc
00:05:39.897 16:17:13 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:05:39.897 16:17:13 -- setup/hugepages.sh@49 -- # local size=2097152
00:05:39.897 16:17:13 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:39.897 16:17:13 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:39.897 16:17:13 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:39.897 16:17:13 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:39.897 16:17:13 -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:39.897 16:17:13 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:39.897 16:17:13 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:39.897 16:17:13 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:39.897 16:17:13 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:39.897 16:17:13 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:39.897 16:17:13 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:39.897 16:17:13 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:39.897 16:17:13 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:39.897 16:17:13 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:05:39.897 16:17:13 -- setup/hugepages.sh@83 -- # : 0
00:05:39.897 16:17:13 -- setup/hugepages.sh@84 -- # : 0
00:05:39.897 16:17:13 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:39.897 16:17:13 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:05:39.897 16:17:13 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:05:39.897 16:17:13 -- setup/hugepages.sh@153 -- # setup output
00:05:39.897 16:17:13 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:39.897 16:17:13 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:40.467 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:40.467 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:40.467 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
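even_2G_alloc requests 2097152 kB, which at the 2048 kB default page size is the nr_hugepages=1024 seen above, and HUGE_EVEN_ALLOC=yes tells scripts/setup.sh to spread that budget evenly across the online NUMA nodes. On this single-node VM the whole 1024 lands on node0, matching the HugePages_Total: 1024 in the snapshots that follow. A minimal sketch of such an even split, assuming root and the standard per-node sysfs knob; this illustrates the intent, not the actual setup.sh logic:

# Spread a hugepage budget evenly over all online NUMA nodes by writing
# each node's 2048 kB nr_hugepages knob (requires root; remainder pages
# from an uneven division are simply dropped in this sketch).
even_alloc() {
	local nrhuge=${1:-1024} dir
	local nodes=(/sys/devices/system/node/node[0-9]*)
	local per_node=$((nrhuge / ${#nodes[@]}))
	for dir in "${nodes[@]}"; do
		echo "$per_node" >"$dir/hugepages/hugepages-2048kB/nr_hugepages"
	done
}

even_alloc 1024   # one node here, so node0 gets all 1024 pages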
00:05:40.467 16:17:14 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:05:40.467 16:17:14 -- setup/hugepages.sh@89 -- # local node
00:05:40.467 16:17:14 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:40.467 16:17:14 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:40.467 16:17:14 -- setup/hugepages.sh@92 -- # local surp
00:05:40.467 16:17:14 -- setup/hugepages.sh@93 -- # local resv
00:05:40.467 16:17:14 -- setup/hugepages.sh@94 -- # local anon
00:05:40.467 16:17:14 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:40.467 16:17:14 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:40.467 16:17:14 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:40.467 16:17:14 -- setup/common.sh@18 -- # local node=
00:05:40.467 16:17:14 -- setup/common.sh@19 -- # local var val
00:05:40.467 16:17:14 -- setup/common.sh@20 -- # local mem_f mem
00:05:40.467 16:17:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:40.467 16:17:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:40.467 16:17:14 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:40.467 16:17:14 -- setup/common.sh@28 -- # mapfile -t mem
00:05:40.467 16:17:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:40.467 16:17:14 -- setup/common.sh@31 -- # IFS=': '
00:05:40.467 16:17:14 -- setup/common.sh@31 -- # read -r var val _
00:05:40.467 16:17:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7550492 kB' 'MemAvailable: 9475884 kB' 'Buffers: 2436 kB' 'Cached: 2135240 kB' 'SwapCached: 0 kB' 'Active: 892916 kB' 'Inactive: 1367864 kB' 'Active(anon): 133568 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1367864 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1204 kB' 'Writeback: 0 kB' 'AnonPages: 124732 kB' 'Mapped: 48920 kB' 'Shmem: 10464 kB' 'KReclaimable: 70240 kB' 'Slab: 146992 kB' 'SReclaimable: 70240 kB' 'SUnreclaim: 76752 kB' 'KernelStack: 6564 kB' 'PageTables: 4548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 347776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB'
[... per-field scan: every field before AnonHugePages skipped via continue ...]
00:05:40.468 16:17:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:40.468 16:17:14 -- setup/common.sh@33 -- # echo 0
00:05:40.468 16:17:14 -- setup/common.sh@33 -- # return 0
00:05:40.468 16:17:14 -- setup/hugepages.sh@97 -- # anon=0
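The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] guard above reads as: the transparent-hugepage mode string does not contain [never], so THP is at least partially enabled and the verifier also samples AnonHugePages (0 kB here, so THP contributes nothing to the totals). A small sketch of the same guard, assuming the usual sysfs path and the hypothetical get_meminfo_value helper from earlier:

# The active THP choice is the bracketed word, e.g. "always [madvise] never".
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
if [[ $thp != *"[never]"* ]]; then
	# THP can create anonymous hugepages, so account for them separately.
	anon_kb=$(get_meminfo_value AnonHugePages)
	echo "AnonHugePages: ${anon_kb} kB"
fi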
[[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # continue 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # continue 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # continue 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # continue 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # continue 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # continue 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # continue 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # continue 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # continue 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # continue 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # continue 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # continue 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # continue 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.468 
16:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # continue 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # continue 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # continue 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # continue 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # continue 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # continue 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # continue 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # continue 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # continue 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # continue 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # continue 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # continue 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # continue 
00:05:40.468 16:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # continue 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # continue 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # continue 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # continue 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # continue 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # continue 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # continue 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # continue 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # continue 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # continue 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # continue 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # continue 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.468 16:17:14 -- setup/common.sh@32 
-- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # continue 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # continue 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # continue 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.468 16:17:14 -- setup/common.sh@32 -- # continue 00:05:40.468 16:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.469 16:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.469 16:17:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.469 16:17:14 -- setup/common.sh@32 -- # continue 00:05:40.469 16:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.469 16:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.469 16:17:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.469 16:17:14 -- setup/common.sh@32 -- # continue 00:05:40.469 16:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.469 16:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.469 16:17:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.469 16:17:14 -- setup/common.sh@32 -- # continue 00:05:40.469 16:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.469 16:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.469 16:17:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.469 16:17:14 -- setup/common.sh@32 -- # continue 00:05:40.469 16:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.469 16:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.469 16:17:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.469 16:17:14 -- setup/common.sh@32 -- # continue 00:05:40.469 16:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.469 16:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.469 16:17:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.469 16:17:14 -- setup/common.sh@32 -- # continue 00:05:40.469 16:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.469 16:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.469 16:17:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.469 16:17:14 -- setup/common.sh@32 -- # continue 00:05:40.469 16:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.469 16:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.469 16:17:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.469 16:17:14 -- setup/common.sh@32 -- # continue 00:05:40.469 16:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.469 16:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.469 16:17:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.469 16:17:14 -- setup/common.sh@32 -- # continue 00:05:40.469 16:17:14 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:40.469 16:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.469 16:17:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.469 16:17:14 -- setup/common.sh@33 -- # echo 0 00:05:40.469 16:17:14 -- setup/common.sh@33 -- # return 0 00:05:40.469 16:17:14 -- setup/hugepages.sh@99 -- # surp=0 00:05:40.469 16:17:14 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:40.469 16:17:14 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:40.469 16:17:14 -- setup/common.sh@18 -- # local node= 00:05:40.469 16:17:14 -- setup/common.sh@19 -- # local var val 00:05:40.469 16:17:14 -- setup/common.sh@20 -- # local mem_f mem 00:05:40.469 16:17:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:40.469 16:17:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:40.469 16:17:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:40.469 16:17:14 -- setup/common.sh@28 -- # mapfile -t mem 00:05:40.469 16:17:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:40.469 16:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.469 16:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.469 16:17:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7550240 kB' 'MemAvailable: 9475632 kB' 'Buffers: 2436 kB' 'Cached: 2135240 kB' 'SwapCached: 0 kB' 'Active: 892496 kB' 'Inactive: 1367864 kB' 'Active(anon): 133148 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1367864 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1204 kB' 'Writeback: 0 kB' 'AnonPages: 124256 kB' 'Mapped: 48844 kB' 'Shmem: 10464 kB' 'KReclaimable: 70240 kB' 'Slab: 146988 kB' 'SReclaimable: 70240 kB' 'SUnreclaim: 76748 kB' 'KernelStack: 6560 kB' 'PageTables: 4436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 347776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:05:40.469 16:17:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.469 16:17:14 -- setup/common.sh@32 -- # continue 00:05:40.469 16:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.469 16:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.469 16:17:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.469 16:17:14 -- setup/common.sh@32 -- # continue 00:05:40.469 16:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.469 16:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.469 16:17:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.469 16:17:14 -- setup/common.sh@32 -- # continue 00:05:40.469 16:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.469 16:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.469 16:17:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.469 16:17:14 -- setup/common.sh@32 -- # continue 00:05:40.469 16:17:14 -- 
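The cycle condensed above is the whole of get_meminfo: common.sh slurps the meminfo file into an array with mapfile, then walks it with IFS=': ' and read -r until the requested key matches, echoing the value and returning. A minimal standalone sketch of that idiom, with a hypothetical function name (the real helper is the setup/common.sh function traced here, which also handles the per-node case shown further down):

    # Sketch of the parsing idiom the trace shows; get_meminfo_field is a
    # hypothetical name, not the exact SPDK source.
    get_meminfo_field() {
        local get=$1 var val _ line
        local -a mem
        mapfile -t mem < /proc/meminfo              # one array element per line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"  # "Key:  123 kB" -> var=Key, val=123
            [[ $var == "$get" ]] || continue        # skip non-matching keys
            echo "$val"
            return 0
        done
        return 1
    }

Against the snapshot above, get_meminfo_field HugePages_Surp prints 0.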
00:05:40.469 16:17:14 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:40.469 16:17:14 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:40.469 16:17:14 -- setup/common.sh@18 -- # local node=
00:05:40.469 16:17:14 -- setup/common.sh@19 -- # local var val
00:05:40.469 16:17:14 -- setup/common.sh@20 -- # local mem_f mem
00:05:40.469 16:17:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:40.469 16:17:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:40.469 16:17:14 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:40.469 16:17:14 -- setup/common.sh@28 -- # mapfile -t mem
00:05:40.469 16:17:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:40.469 16:17:14 -- setup/common.sh@31 -- # IFS=': '
00:05:40.469 16:17:14 -- setup/common.sh@31 -- # read -r var val _
00:05:40.469 16:17:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7550240 kB' 'MemAvailable: 9475632 kB' 'Buffers: 2436 kB' 'Cached: 2135240 kB' 'SwapCached: 0 kB' 'Active: 892496 kB' 'Inactive: 1367864 kB' 'Active(anon): 133148 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1367864 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1204 kB' 'Writeback: 0 kB' 'AnonPages: 124256 kB' 'Mapped: 48844 kB' 'Shmem: 10464 kB' 'KReclaimable: 70240 kB' 'Slab: 146988 kB' 'SReclaimable: 70240 kB' 'SUnreclaim: 76748 kB' 'KernelStack: 6560 kB' 'PageTables: 4436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 347776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB'
00:05:40.469 16:17:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] [the common.sh@31 IFS=': ' / read -r var val _ / @32 continue cycle repeats for every key from MemTotal through HugePages_Free]
00:05:40.470 16:17:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:40.470 16:17:14 -- setup/common.sh@33 -- # echo 0
00:05:40.470 16:17:14 -- setup/common.sh@33 -- # return 0
00:05:40.470 16:17:14 -- setup/hugepages.sh@100 -- # resv=0
00:05:40.470 16:17:14 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:40.470 nr_hugepages=1024
00:05:40.470 16:17:14 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:40.470 resv_hugepages=0
00:05:40.470 16:17:14 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:40.470 surplus_hugepages=0
00:05:40.470 16:17:14 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:40.470 anon_hugepages=0
00:05:40.470 16:17:14 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:40.470 16:17:14 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
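hugepages.sh@107 then cross-checks the bookkeeping: the pool the test configured (nr_hugepages=1024) plus surplus and reserved pages must match what the kernel reports before the test proceeds. The same assertion, spelled out with the hypothetical helper sketched earlier:

    nr_hugepages=1024 surp=0 resv=0
    total=$(get_meminfo_field HugePages_Total)      # 1024 in this run
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage pool mismatch' >&2

With 1024 == 1024 + 0 + 0 the check passes, and the test re-reads HugePages_Total below before moving on to the per-node breakdown.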
00:05:40.470 16:17:14 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:40.470 16:17:14 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:40.470 16:17:14 -- setup/common.sh@18 -- # local node=
00:05:40.470 16:17:14 -- setup/common.sh@19 -- # local var val
00:05:40.470 16:17:14 -- setup/common.sh@20 -- # local mem_f mem
00:05:40.470 16:17:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:40.470 16:17:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:40.470 16:17:14 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:40.470 16:17:14 -- setup/common.sh@28 -- # mapfile -t mem
00:05:40.470 16:17:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:40.470 16:17:14 -- setup/common.sh@31 -- # IFS=': '
00:05:40.470 16:17:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7550592 kB' 'MemAvailable: 9475984 kB' 'Buffers: 2436 kB' 'Cached: 2135240 kB' 'SwapCached: 0 kB' 'Active: 892488 kB' 'Inactive: 1367864 kB' 'Active(anon): 133140 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1367864 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1204 kB' 'Writeback: 0 kB' 'AnonPages: 124248 kB' 'Mapped: 48844 kB' 'Shmem: 10464 kB' 'KReclaimable: 70240 kB' 'Slab: 146988 kB' 'SReclaimable: 70240 kB' 'SUnreclaim: 76748 kB' 'KernelStack: 6576 kB' 'PageTables: 4492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 347776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB'
00:05:40.470 16:17:14 -- setup/common.sh@31 -- # read -r var val _
00:05:40.470 16:17:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] [the common.sh@31 IFS=': ' / read -r var val _ / @32 continue cycle repeats for every key from MemTotal through Unaccepted]
00:05:40.471 16:17:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:40.471 16:17:14 -- setup/common.sh@33 -- # echo 1024
00:05:40.471 16:17:14 -- setup/common.sh@33 -- # return 0
00:05:40.471 16:17:14 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:40.471 16:17:14 -- setup/hugepages.sh@112 -- # get_nodes
00:05:40.471 16:17:14 -- setup/hugepages.sh@27 -- # local node
00:05:40.471 16:17:14 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:40.471 16:17:14 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:40.471 16:17:14 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:40.471 16:17:14 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:40.471 16:17:14 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:40.471 16:17:14 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
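get_nodes discovers NUMA nodes by globbing /sys/devices/system/node/node+([0-9]), an extglob pattern that matches exactly one directory, node0, on this single-node VM, and ${node##*node} strips the path down to the numeric index. A sketch of that discovery step, which reads the per-node count straight from sysfs where the trace reuses the value it already parsed (the sysfs path and direct read are assumptions for illustration):

    shopt -s extglob
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        # "/sys/devices/system/node/node0" -> index "0"
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    echo "found ${#nodes_sys[@]} node(s)"           # 1 here, with nodes_sys[0]=1024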
00:05:40.471 16:17:14 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:40.471 16:17:14 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:40.471 16:17:14 -- setup/common.sh@18 -- # local node=0
00:05:40.471 16:17:14 -- setup/common.sh@19 -- # local var val
00:05:40.471 16:17:14 -- setup/common.sh@20 -- # local mem_f mem
00:05:40.471 16:17:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:40.471 16:17:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:40.471 16:17:14 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:40.471 16:17:14 -- setup/common.sh@28 -- # mapfile -t mem
00:05:40.471 16:17:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:40.471 16:17:14 -- setup/common.sh@31 -- # IFS=': '
00:05:40.471 16:17:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7551256 kB' 'MemUsed: 4690720 kB' 'SwapCached: 0 kB' 'Active: 892388 kB' 'Inactive: 1367864 kB' 'Active(anon): 133040 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1367864 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1204 kB' 'Writeback: 0 kB' 'FilePages: 2137676 kB' 'Mapped: 48844 kB' 'AnonPages: 124148 kB' 'Shmem: 10464 kB' 'KernelStack: 6544 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70240 kB' 'Slab: 146988 kB' 'SReclaimable: 70240 kB' 'SUnreclaim: 76748 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:40.471 16:17:14 -- setup/common.sh@31 -- # read -r var val _
00:05:40.471 16:17:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] [the common.sh@31 IFS=': ' / read -r var val _ / @32 continue cycle repeats for every key from MemTotal through HugePages_Free]
00:05:40.471 16:17:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
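Because a node argument was given this time, common.sh@24 retargets mem_f at /sys/devices/system/node/node0/meminfo. Per-node meminfo lines carry a "Node 0" prefix ("Node 0 MemTotal: ..."), and the mem=("${mem[@]#Node +([0-9]) }") line visible in the trace strips it with an extglob pattern so the same key scan works for both files. Roughly:

    shopt -s extglob
    mapfile -t mem < /sys/devices/system/node/node0/meminfo
    # "Node 0 MemTotal: 12241976 kB" -> "MemTotal: 12241976 kB"
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]:0:3}"                   # first three stripped lines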
00:05:40.471 16:17:14 -- setup/common.sh@33 -- # echo 0
00:05:40.471 16:17:14 -- setup/common.sh@33 -- # return 0
00:05:40.472 16:17:14 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:40.472 16:17:14 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:40.472 16:17:14 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:40.472 16:17:14 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:40.472 node0=1024 expecting 1024
00:05:40.472 16:17:14 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:40.472 16:17:14 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:40.472 
00:05:40.472 real    0m0.533s
00:05:40.472 user    0m0.262s
00:05:40.472 sys     0m0.306s
00:05:40.472 16:17:14 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:40.472 16:17:14 -- common/autotest_common.sh@10 -- # set +x
00:05:40.472 ************************************
00:05:40.472 END TEST even_2G_alloc
00:05:40.472 ************************************
00:05:40.472 16:17:14 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:05:40.472 16:17:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:40.472 16:17:14 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:40.472 16:17:14 -- common/autotest_common.sh@10 -- # set +x
00:05:40.730 ************************************
00:05:40.730 START TEST odd_alloc
00:05:40.730 ************************************
00:05:40.730 16:17:14 -- common/autotest_common.sh@1111 -- # odd_alloc
00:05:40.730 16:17:14 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:05:40.730 16:17:14 -- setup/hugepages.sh@49 -- # local size=2098176
00:05:40.730 16:17:14 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:40.730 16:17:14 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:40.730 16:17:14 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:05:40.730 16:17:14 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:40.730 16:17:14 -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:40.730 16:17:14 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:40.730 16:17:14 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:05:40.730 16:17:14 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:40.730 16:17:14 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:40.730 16:17:14 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:40.730 16:17:14 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:40.730 16:17:14 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:40.730 16:17:14 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:40.730 16:17:14 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:05:40.730 16:17:14 -- setup/hugepages.sh@83 -- # : 0
00:05:40.730 16:17:14 -- setup/hugepages.sh@84 -- # : 0
00:05:40.730 16:17:14 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:40.730 16:17:14 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:05:40.730 16:17:14 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:05:40.730 16:17:14 -- setup/hugepages.sh@160 -- # setup output
00:05:40.730 16:17:14 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:40.730 16:17:14 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:40.991 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:40.991 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:40.991 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
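The odd_alloc preamble above requests 2098176 kB of hugepages and settles on nr_hugepages=1025. With the 2048 kB default page size reported in the meminfo dumps below ('Hugepagesize: 2048 kB'), that is consistent with rounding the quotient up, which is how the test arrives at an odd page count. A sketch of the arithmetic, with illustrative variable names rather than setup/hugepages.sh's own:

    #!/usr/bin/env bash
    size_kb=2098176   # requested pool size in kB (HUGEMEM=2049 MB)
    page_kb=2048      # default hugepage size in kB
    nr=$(( (size_kb + page_kb - 1) / page_kb ))   # ceiling of 1024.5 -> 1025
    echo "nr_hugepages=$nr"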
00:05:40.991 16:17:14 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:05:40.991 16:17:14 -- setup/hugepages.sh@89 -- # local node
00:05:40.991 16:17:14 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:40.991 16:17:14 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:40.991 16:17:14 -- setup/hugepages.sh@92 -- # local surp
00:05:40.991 16:17:14 -- setup/hugepages.sh@93 -- # local resv
00:05:40.991 16:17:14 -- setup/hugepages.sh@94 -- # local anon
00:05:40.991 16:17:14 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:40.991 16:17:14 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:40.991 16:17:14 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:40.991 16:17:14 -- setup/common.sh@18 -- # local node=
00:05:40.991 16:17:14 -- setup/common.sh@19 -- # local var val
00:05:40.991 16:17:14 -- setup/common.sh@20 -- # local mem_f mem
00:05:40.991 16:17:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:40.991 16:17:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:40.991 16:17:14 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:40.991 16:17:14 -- setup/common.sh@28 -- # mapfile -t mem
00:05:40.991 16:17:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:40.991 16:17:14 -- setup/common.sh@31 -- # IFS=': '
00:05:40.991 16:17:14 -- setup/common.sh@31 -- # read -r var val _
00:05:40.991 16:17:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7547396 kB' 'MemAvailable: 9472788 kB' 'Buffers: 2436 kB' 'Cached: 2135240 kB' 'SwapCached: 0 kB' 'Active: 892536 kB' 'Inactive: 1367864 kB' 'Active(anon): 133188 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1367864 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1352 kB' 'Writeback: 0 kB' 'AnonPages: 124596 kB' 'Mapped: 48984 kB' 'Shmem: 10464 kB' 'KReclaimable: 70240 kB' 'Slab: 147104 kB' 'SReclaimable: 70240 kB' 'SUnreclaim: 76864 kB' 'KernelStack: 6548 kB' 'PageTables: 4500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 347912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB'
00:05:40.991 [xtrace condensed: setup/common.sh@31-32 scans every field of the snapshot above, from MemTotal onward, and continues past each one that is not AnonHugePages]
00:05:40.992 16:17:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:40.992 16:17:14 -- setup/common.sh@33 -- # echo 0
00:05:40.992 16:17:14 -- setup/common.sh@33 -- # return 0
00:05:40.992 16:17:14 -- setup/hugepages.sh@97 -- # anon=0
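The AnonHugePages lookup above shows the whole get_meminfo pattern end to end: slurp /proc/meminfo, strip any 'Node N ' prefix (present when a per-node meminfo file is read instead), split each line on ': ', and print the value of the first field whose name matches. A hedged re-sketch derived from the trace, not copied from setup/common.sh:

    #!/usr/bin/env bash
    shopt -s extglob
    get_meminfo() {
        local get=$1 line var val _
        mapfile -t mem < /proc/meminfo
        mem=("${mem[@]#Node +([0-9]) }")     # no-op for the system-wide file
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue  # the long scan seen in the log
            echo "${val:-0}"
            return 0
        done
        echo 0
    }
    get_meminfo AnonHugePages   # prints 0 on this host, hence anon=0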
00:05:40.992 16:17:14 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:40.992 16:17:14 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:40.992 16:17:14 -- setup/common.sh@18 -- # local node=
00:05:40.992 16:17:14 -- setup/common.sh@19 -- # local var val
00:05:40.992 16:17:14 -- setup/common.sh@20 -- # local mem_f mem
00:05:40.992 16:17:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:40.992 16:17:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:40.992 16:17:14 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:40.992 16:17:14 -- setup/common.sh@28 -- # mapfile -t mem
00:05:40.992 16:17:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:40.992 16:17:14 -- setup/common.sh@31 -- # IFS=': '
00:05:40.992 16:17:14 -- setup/common.sh@31 -- # read -r var val _
00:05:40.992 16:17:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7547396 kB' 'MemAvailable: 9472788 kB' 'Buffers: 2436 kB' 'Cached: 2135240 kB' 'SwapCached: 0 kB' 'Active: 892484 kB' 'Inactive: 1367864 kB' 'Active(anon): 133136 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1367864 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1352 kB' 'Writeback: 0 kB' 'AnonPages: 124268 kB' 'Mapped: 48856 kB' 'Shmem: 10464 kB' 'KReclaimable: 70240 kB' 'Slab: 147100 kB' 'SReclaimable: 70240 kB' 'SUnreclaim: 76860 kB' 'KernelStack: 6576 kB' 'PageTables: 4488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 347912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB'
00:05:40.992 [xtrace condensed: same field-by-field scan as above, this time continuing past every field that is not HugePages_Surp]
00:05:40.994 16:17:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:40.994 16:17:15 -- setup/common.sh@33 -- # echo 0
00:05:40.994 16:17:15 -- setup/common.sh@33 -- # return 0
00:05:40.994 16:17:15 -- setup/hugepages.sh@99 -- # surp=0
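anon and surp (and resv just below) come straight from the kernel's hugepage accounting: HugePages_Rsvd counts pages promised to mappings but not yet faulted in, and HugePages_Surp counts pages allocated beyond the configured pool. Outside the harness the same values can be checked with a plain grep:

    grep -E '^(AnonHugePages|HugePages_(Total|Free|Rsvd|Surp))' /proc/meminfo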
00:05:40.994 16:17:15 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:40.994 16:17:15 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:40.994 16:17:15 -- setup/common.sh@18 -- # local node=
00:05:40.994 16:17:15 -- setup/common.sh@19 -- # local var val
00:05:40.994 16:17:15 -- setup/common.sh@20 -- # local mem_f mem
00:05:40.994 16:17:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:40.994 16:17:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:40.994 16:17:15 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:40.994 16:17:15 -- setup/common.sh@28 -- # mapfile -t mem
00:05:40.994 16:17:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:40.994 16:17:15 -- setup/common.sh@31 -- # IFS=': '
00:05:40.994 16:17:15 -- setup/common.sh@31 -- # read -r var val _
00:05:40.994 16:17:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7547648 kB' 'MemAvailable: 9473040 kB' 'Buffers: 2436 kB' 'Cached: 2135240 kB' 'SwapCached: 0 kB' 'Active: 892232 kB' 'Inactive: 1367864 kB' 'Active(anon): 132884 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1367864 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1352 kB' 'Writeback: 0 kB' 'AnonPages: 124036 kB' 'Mapped: 48856 kB' 'Shmem: 10464 kB' 'KReclaimable: 70240 kB' 'Slab: 147096 kB' 'SReclaimable: 70240 kB' 'SUnreclaim: 76856 kB' 'KernelStack: 6544 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 347680 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB'
00:05:41.255 [xtrace condensed: same field-by-field scan, continuing past every field that is not HugePages_Rsvd]
00:05:41.255 16:17:15 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:41.255 16:17:15 -- setup/common.sh@33 -- # echo 0
00:05:41.255 16:17:15 -- setup/common.sh@33 -- # return 0
00:05:41.255 16:17:15 -- setup/hugepages.sh@100 -- # resv=0
00:05:41.255 nr_hugepages=1025
00:05:41.255 16:17:15 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:05:41.255 resv_hugepages=0
00:05:41.255 16:17:15 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:41.255 surplus_hugepages=0
00:05:41.255 16:17:15 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:41.255 anon_hugepages=0
00:05:41.255 16:17:15 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:41.255 16:17:15 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:05:41.255 16:17:15 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
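The two arithmetic tests above are the heart of verify_nr_hugepages: the expected page count must equal the requested pool plus surplus and reserved pages (here 1025 == 1025 + 0 + 0), and the kernel-reported HugePages_Total, read next, is checked against the same figure. Restated as a standalone check, with the traced values hard-coded and an illustrative awk line:

    #!/usr/bin/env bash
    nr_hugepages=1025 surp=0 resv=0   # values from the trace above
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    (( total == nr_hugepages + surp + resv )) && echo "pool accounting consistent: $total"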
00:05:41.255 16:17:15 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:41.255 16:17:15 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:41.255 16:17:15 -- setup/common.sh@18 -- # local node=
00:05:41.255 16:17:15 -- setup/common.sh@19 -- # local var val
00:05:41.255 16:17:15 -- setup/common.sh@20 -- # local mem_f mem
00:05:41.255 16:17:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:41.255 16:17:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:41.255 16:17:15 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:41.256 16:17:15 -- setup/common.sh@28 -- # mapfile -t mem
00:05:41.256 16:17:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:41.256 16:17:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7547648 kB' 'MemAvailable: 9473044 kB' 'Buffers: 2436 kB' 'Cached: 2135244 kB' 'SwapCached: 0 kB' 'Active: 892412 kB' 'Inactive: 1367868 kB' 'Active(anon): 133064 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1367868 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1352 kB' 'Writeback: 0 kB' 'AnonPages: 124200 kB' 'Mapped: 48856 kB' 'Shmem: 10464 kB' 'KReclaimable: 70240 kB' 'Slab: 147096 kB' 'SReclaimable: 70240 kB' 'SUnreclaim: 76856 kB' 'KernelStack: 6560 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 348044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB'
00:05:41.256 16:17:15 -- setup/common.sh@31 -- # IFS=': '
00:05:41.256 16:17:15 -- setup/common.sh@31 -- # read -r var val _
00:05:41.256 [xtrace condensed: same field-by-field scan, continuing past every field that is not HugePages_Total]
00:05:41.257 16:17:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:41.257 16:17:15 -- setup/common.sh@33 -- # echo 1025
00:05:41.257 16:17:15 -- setup/common.sh@33 
-- # return 0 00:05:41.257 16:17:15 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:41.257 16:17:15 -- setup/hugepages.sh@112 -- # get_nodes 00:05:41.257 16:17:15 -- setup/hugepages.sh@27 -- # local node 00:05:41.257 16:17:15 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:41.257 16:17:15 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:05:41.257 16:17:15 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:41.257 16:17:15 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:41.257 16:17:15 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:41.257 16:17:15 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:41.257 16:17:15 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:41.257 16:17:15 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:41.257 16:17:15 -- setup/common.sh@18 -- # local node=0 00:05:41.257 16:17:15 -- setup/common.sh@19 -- # local var val 00:05:41.257 16:17:15 -- setup/common.sh@20 -- # local mem_f mem 00:05:41.257 16:17:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:41.257 16:17:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:41.257 16:17:15 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:41.257 16:17:15 -- setup/common.sh@28 -- # mapfile -t mem 00:05:41.257 16:17:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:41.257 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.257 16:17:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7547984 kB' 'MemUsed: 4693992 kB' 'SwapCached: 0 kB' 'Active: 892412 kB' 'Inactive: 1367868 kB' 'Active(anon): 133064 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1367868 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1352 kB' 'Writeback: 0 kB' 'FilePages: 2137680 kB' 'Mapped: 48856 kB' 'AnonPages: 124240 kB' 'Shmem: 10464 kB' 'KernelStack: 6560 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70240 kB' 'Slab: 147096 kB' 'SReclaimable: 70240 kB' 'SUnreclaim: 76856 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:41.257 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.257 16:17:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.257 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.257 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.257 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.257 16:17:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.257 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.257 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.257 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.257 16:17:15 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.257 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.257 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.257 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.257 16:17:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.257 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.257 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.257 16:17:15 -- 
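The trace above is the inner loop of get_meminfo in setup/common.sh: pick the per-node meminfo file when a node argument is given, strip the "Node N " prefix the per-node files carry, then read key/value pairs with IFS=': ' until the requested key matches. A minimal sketch of that pattern (simplified from what the xtrace shows; the for-loop form is an illustration, not the verbatim SPDK source):

    #!/usr/bin/env bash
    shopt -s extglob  # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=$2 var val _ line
        local mem_f=/proc/meminfo mem
        # Prefer the per-NUMA-node view when a node was requested
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem <"$mem_f"
        # Per-node files prefix every line with "Node N "; strip it
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            # "HugePages_Surp: 0" -> var=HugePages_Surp val=0
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] && echo "$val" && return 0
        done
        return 1
    }

    get_meminfo HugePages_Total 0   # -> 1025 on this run's node0

With this run's node0 dump, get_meminfo HugePages_Surp 0 prints 0, which is exactly the echo 0 / return 0 pair the scan below ends with.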
00:05:41.257 16:17:15 -- setup/common.sh@31 -- # read -r var val _
[... per-key scan of the node0 dump above: every key from MemTotal through HugePages_Free fails the match against HugePages_Surp and hits "continue"; iterations elided ...]
00:05:41.258 16:17:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:41.258 16:17:15 -- setup/common.sh@33 -- # echo 0
00:05:41.258 16:17:15 -- setup/common.sh@33 -- # return 0
00:05:41.258 16:17:15 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:41.258 16:17:15 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:41.258 16:17:15 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:41.258 16:17:15 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:41.258 node0=1025 expecting 1025
00:05:41.258 16:17:15 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025'
00:05:41.258 16:17:15 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]]
00:05:41.258 
00:05:41.258 real	0m0.536s
00:05:41.258 user	0m0.280s
00:05:41.258 sys	0m0.289s
00:05:41.258 16:17:15 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:41.258 16:17:15 -- common/autotest_common.sh@10 -- # set +x
00:05:41.258 ************************************
00:05:41.258 END TEST odd_alloc
00:05:41.258 ************************************
00:05:41.258 16:17:15 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:05:41.258 16:17:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:41.258 16:17:15 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:41.258 16:17:15 -- common/autotest_common.sh@10 -- # set +x
00:05:41.258 ************************************
00:05:41.258 START TEST custom_alloc
00:05:41.258 ************************************
00:05:41.258 16:17:15 -- common/autotest_common.sh@1111 -- # custom_alloc
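The odd_alloc pass just recorded reduces to integer accounting: the kernel-reported HugePages_Total (1025) must equal the requested nr_hugepages plus surplus and reserved pages, and each NUMA node's share must add up the same way, which is what the "node0=1025 expecting 1025" line confirms. A minimal sketch of that check with this run's values (names mirror the trace; this is an illustration, not the SPDK source):

    # Values observed in the trace above
    nr_hugepages=1025
    surp=0    # HugePages_Surp
    resv=0    # HugePages_Rsvd

    # Global check (setup/hugepages.sh@110 in the trace):
    (( 1025 == nr_hugepages + surp + resv )) && echo "global: OK"

    # Per-node check: with one node, node0 carries all 1025 pages
    nodes_test=([0]=1025)
    for node in "${!nodes_test[@]}"; do
        echo "node$node=${nodes_test[node]} expecting 1025"
    done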
00:05:41.258 16:17:15 -- setup/hugepages.sh@167 -- # local IFS=,
00:05:41.258 16:17:15 -- setup/hugepages.sh@169 -- # local node
00:05:41.258 16:17:15 -- setup/hugepages.sh@170 -- # nodes_hp=()
00:05:41.258 16:17:15 -- setup/hugepages.sh@170 -- # local nodes_hp
00:05:41.258 16:17:15 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:05:41.258 16:17:15 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:05:41.258 16:17:15 -- setup/hugepages.sh@49 -- # local size=1048576
00:05:41.258 16:17:15 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:41.258 16:17:15 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:41.258 16:17:15 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:41.258 16:17:15 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:41.258 16:17:15 -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:41.258 16:17:15 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:41.258 16:17:15 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:41.258 16:17:15 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:41.258 16:17:15 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:41.258 16:17:15 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:41.258 16:17:15 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:41.258 16:17:15 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:41.258 16:17:15 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:41.258 16:17:15 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:05:41.258 16:17:15 -- setup/hugepages.sh@83 -- # : 0
00:05:41.258 16:17:15 -- setup/hugepages.sh@84 -- # : 0
00:05:41.258 16:17:15 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:41.258 16:17:15 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:05:41.258 16:17:15 -- setup/hugepages.sh@176 -- # (( 1 > 1 ))
00:05:41.258 16:17:15 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:05:41.258 16:17:15 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:05:41.258 16:17:15 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:05:41.258 16:17:15 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
[... second get_test_nr_hugepages_per_node pass: the same user_nodes/_nr_hugepages/_no_nodes/nodes_test declarations repeat as above, elided ...]
00:05:41.259 16:17:15 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:41.259 16:17:15 -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:05:41.259 16:17:15 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:05:41.259 16:17:15 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:05:41.259 16:17:15 -- setup/hugepages.sh@78 -- # return 0
00:05:41.259 16:17:15 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512'
00:05:41.259 16:17:15 -- setup/hugepages.sh@187 -- # setup output
00:05:41.259 16:17:15 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:41.259 16:17:15 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:41.517 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:41.780 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:41.780 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
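The get_test_nr_hugepages 1048576 call above converts a requested pool size into a page count: 1048576 kB (1 GiB) divided by the default 2048 kB hugepage size yields the nr_hugepages=512 the trace assigns, and with a single NUMA node the whole pool lands on node 0 as HUGENODE='nodes_hp[0]=512'. A sketch of that arithmetic (assuming both quantities are in kB, which the Hugepagesize: 2048 kB and Hugetlb: 1048576 kB lines in the dumps below corroborate):

    size_kb=1048576            # requested pool: 1 GiB, as in the trace
    default_hugepages_kb=2048  # Hugepagesize from /proc/meminfo
    (( size_kb >= default_hugepages_kb )) || exit 1
    nr_hugepages=$(( size_kb / default_hugepages_kb ))
    echo "$nr_hugepages"       # -> 512
    # With one node and no explicit user layout, all pages go to node 0,
    # which is what HUGENODE='nodes_hp[0]=512' encodes above.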
00:05:41.780 16:17:15 -- setup/hugepages.sh@188 -- # nr_hugepages=512
00:05:41.780 16:17:15 -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:05:41.780 16:17:15 -- setup/hugepages.sh@89 -- # local node
00:05:41.780 16:17:15 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:41.780 16:17:15 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:41.780 16:17:15 -- setup/hugepages.sh@92 -- # local surp
00:05:41.780 16:17:15 -- setup/hugepages.sh@93 -- # local resv
00:05:41.780 16:17:15 -- setup/hugepages.sh@94 -- # local anon
00:05:41.780 16:17:15 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:41.780 16:17:15 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:41.780 16:17:15 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:41.780 16:17:15 -- setup/common.sh@18 -- # local node=
00:05:41.780 16:17:15 -- setup/common.sh@19 -- # local var val
00:05:41.780 16:17:15 -- setup/common.sh@20 -- # local mem_f mem
00:05:41.780 16:17:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:41.780 16:17:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:41.780 16:17:15 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:41.780 16:17:15 -- setup/common.sh@28 -- # mapfile -t mem
00:05:41.780 16:17:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:41.780 16:17:15 -- setup/common.sh@31 -- # IFS=': '
00:05:41.780 16:17:15 -- setup/common.sh@31 -- # read -r var val _
00:05:41.780 16:17:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8598748 kB' 'MemAvailable: 10524148 kB' 'Buffers: 2436 kB' 'Cached: 2135248 kB' 'SwapCached: 0 kB' 'Active: 892912 kB' 'Inactive: 1367872 kB' 'Active(anon): 133564 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1367872 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1488 kB' 'Writeback: 0 kB' 'AnonPages: 124932 kB' 'Mapped: 48996 kB' 'Shmem: 10464 kB' 'KReclaimable: 70240 kB' 'Slab: 147096 kB' 'SReclaimable: 70240 kB' 'SUnreclaim: 76856 kB' 'KernelStack: 6580 kB' 'PageTables: 4568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 348044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB'
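The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] entry above is a transparent-hugepage probe: the bracketed word in /sys/kernel/mm/transparent_hugepage/enabled marks the active mode, so the AnonHugePages lookup is only meaningful when that mode is not [never]. A hedged sketch of the same probe (illustrative, not the exact SPDK helper; reuses the get_meminfo sketch shown earlier):

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
    # e.g. "always [madvise] never" -> active mode is madvise
    if [[ $thp != *"[never]"* ]]; then
        # THP can back anonymous memory, so check the counter
        get_meminfo AnonHugePages   # 0 kB on this run
    fi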
[... per-key scan of the dump above: every key from MemTotal through HardwareCorrupted fails the match against AnonHugePages and hits "continue"; iterations elided ...]
00:05:41.781 16:17:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:41.781 16:17:15 -- setup/common.sh@33 -- # echo 0
00:05:41.781 16:17:15 -- setup/common.sh@33 -- # return 0
00:05:41.781 16:17:15 -- setup/hugepages.sh@97 -- # anon=0
00:05:41.781 16:17:15 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:41.781 16:17:15 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:41.781 16:17:15 -- setup/common.sh@18 -- # local node=
00:05:41.781 16:17:15 -- setup/common.sh@19 -- # local var val
00:05:41.781 16:17:15 -- setup/common.sh@20 -- # local mem_f mem
00:05:41.781 16:17:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:41.781 16:17:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:41.781 16:17:15 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:41.781 16:17:15 -- setup/common.sh@28 -- # mapfile -t mem
00:05:41.781 16:17:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:41.781 16:17:15 -- setup/common.sh@31 -- # IFS=': '
00:05:41.781 16:17:15 -- setup/common.sh@31 -- # read -r var val _
00:05:41.781 16:17:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8599256 kB' 'MemAvailable: 10524656 kB' 'Buffers: 2436 kB' 'Cached: 2135248 kB' 'SwapCached: 0 kB' 'Active: 892444 kB' 'Inactive: 1367872 kB' 'Active(anon): 133096 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1367872 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1488 kB' 'Writeback: 0 kB' 'AnonPages: 124192 kB' 'Mapped: 48936 kB' 'Shmem: 10464 kB' 'KReclaimable: 70240 kB' 'Slab: 147096 kB' 'SReclaimable: 70240 kB' 'SUnreclaim: 76856 kB' 'KernelStack: 6516 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 348044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB'
[... per-key scan of the dump above: every key from MemTotal through HugePages_Rsvd fails the match against HugePages_Surp and hits "continue"; iterations elided ...]
00:05:41.783 16:17:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:41.783 16:17:15 -- setup/common.sh@33 -- # echo 0
00:05:41.783 16:17:15 -- setup/common.sh@33 -- # return 0
00:05:41.783 16:17:15 -- setup/hugepages.sh@99 -- # surp=0
00:05:41.783 16:17:15 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:41.783 16:17:15 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:41.783 16:17:15 -- setup/common.sh@18 -- # local node=
00:05:41.783 16:17:15 -- setup/common.sh@19 -- # local var val
00:05:41.783 16:17:15 -- setup/common.sh@20 -- # local mem_f mem
00:05:41.783 16:17:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:41.783 16:17:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:41.783 16:17:15 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:41.783 16:17:15 -- setup/common.sh@28 -- # mapfile -t mem
00:05:41.783 16:17:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:41.783 16:17:15 -- setup/common.sh@31 -- # IFS=': '
00:05:41.783 16:17:15 -- setup/common.sh@31 -- # read -r var val _
00:05:41.783 16:17:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8599628 kB' 'MemAvailable: 10525028 kB' 'Buffers: 2436 kB' 'Cached: 2135248 kB' 'SwapCached: 0 kB' 'Active: 892248 kB' 'Inactive: 1367872 kB' 'Active(anon): 132900 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1367872 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1488 kB' 'Writeback: 0 kB' 'AnonPages: 124004 kB' 'Mapped: 48868 kB' 'Shmem: 10464 kB' 'KReclaimable: 70240 kB' 'Slab: 147092 kB' 'SReclaimable: 70240 kB' 'SUnreclaim: 76852 kB' 'KernelStack: 6560 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 348044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB'
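verify_nr_hugepages is collecting the three quantities the dumps above expose — AnonHugePages, HugePages_Surp, and (still in progress below) HugePages_Rsvd — so the configured pool can be reconciled against the kernel's counters. A condensed sketch of that reconciliation with this run's values (the total check mirrors the hugepages.sh@110 comparison seen earlier in odd_alloc; variable names are illustrative):

    # Values visible in the meminfo dumps above
    nr_hugepages=512   # requested pool (custom_alloc)
    anon=0             # AnonHugePages, kB
    surp=0             # HugePages_Surp
    resv=0             # HugePages_Rsvd (lookup still running below)

    # HugePages_Total reported by the kernel must account for the
    # requested pages plus any surplus and reserved pages:
    (( 512 == nr_hugepages + surp + resv )) && echo "hugepage accounting: OK"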
[... per-key scan of the dump above for HugePages_Rsvd begins (MemTotal, MemFree, MemAvailable, ... each hit "continue"); the excerpt ends mid-scan ...]
-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.783 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.783 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.783 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.784 16:17:15 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.784 16:17:15 -- setup/common.sh@33 -- # echo 0 00:05:41.784 16:17:15 -- setup/common.sh@33 -- # return 0 00:05:41.784 16:17:15 -- setup/hugepages.sh@100 -- # resv=0 00:05:41.784 nr_hugepages=512 00:05:41.784 16:17:15 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:41.784 resv_hugepages=0 00:05:41.784 16:17:15 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:41.784 surplus_hugepages=0 00:05:41.784 16:17:15 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:41.784 anon_hugepages=0 00:05:41.784 16:17:15 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:41.784 16:17:15 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:41.784 16:17:15 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:41.784 16:17:15 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:41.784 16:17:15 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:41.784 16:17:15 -- setup/common.sh@18 -- # local node= 00:05:41.784 16:17:15 -- setup/common.sh@19 -- # local var val 00:05:41.784 16:17:15 -- setup/common.sh@20 -- # local mem_f mem 00:05:41.784 16:17:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:41.784 16:17:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:41.784 16:17:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:41.784 16:17:15 -- setup/common.sh@28 -- # mapfile -t mem 00:05:41.784 16:17:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:41.784 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.785 16:17:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8599628 kB' 'MemAvailable: 10525028 kB' 'Buffers: 2436 kB' 'Cached: 2135248 kB' 'SwapCached: 0 kB' 'Active: 892248 kB' 'Inactive: 1367872 kB' 'Active(anon): 132900 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1367872 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1488 kB' 'Writeback: 0 kB' 'AnonPages: 124004 kB' 'Mapped: 48868 kB' 'Shmem: 10464 kB' 'KReclaimable: 70240 kB' 'Slab: 147092 kB' 'SReclaimable: 70240 kB' 'SUnreclaim: 76852 kB' 'KernelStack: 6560 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 348044 kB' 'VmallocTotal: 
34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:05:41.785 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.785 16:17:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.785 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.785 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.785 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.785 16:17:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.785 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.785 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.785 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.785 16:17:15 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.785 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.785 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.785 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.785 16:17:15 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.785 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.785 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.785 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.785 16:17:15 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.785 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.785 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.785 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.785 16:17:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.785 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.785 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.785 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.785 16:17:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.785 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.785 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.785 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.785 16:17:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.785 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.785 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.785 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.785 16:17:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.785 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.785 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.785 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.785 16:17:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.785 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.785 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.785 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.785 16:17:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.785 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.785 16:17:15 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:41.785 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.785 16:17:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.785 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.785 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.785 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.785 16:17:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.785 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.785 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.785 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.785 16:17:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.785 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.785 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.785 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.785 16:17:15 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.785 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.785 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.785 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.785 16:17:15 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.785 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.785 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.785 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.785 16:17:15 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.785 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.785 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.785 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.785 16:17:15 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.785 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.785 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.785 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.785 16:17:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.785 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.785 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.785 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.785 16:17:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.785 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.785 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.785 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.785 16:17:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.785 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.785 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.785 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.785 16:17:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.785 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.785 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.785 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.786 16:17:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.786 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.786 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.786 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.786 16:17:15 -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.786 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.786 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.786 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.786 16:17:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.786 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.786 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.786 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.786 16:17:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.786 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.786 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.786 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.786 16:17:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.786 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.786 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.786 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.786 16:17:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.786 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.786 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.786 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.786 16:17:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.786 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.786 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.786 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.786 16:17:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.786 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.786 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.786 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.786 16:17:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.786 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.786 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.786 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.786 16:17:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.786 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.786 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.786 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.786 16:17:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.786 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.786 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.786 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.786 16:17:15 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.786 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.786 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.786 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.786 16:17:15 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.786 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.786 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.786 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.786 16:17:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.786 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.786 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 
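
# NOTE (annotation, not captured output): the xtrace above is setup/common.sh's
# get_meminfo() scanning /proc/meminfo one key at a time. The tests print as
# [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] only because xtrace
# backslash-escapes every character of the right-hand side of [[ == ]] to show
# the match is literal rather than a glob. A minimal sketch of the same parsing
# idea (helper name and error handling are illustrative, not SPDK's exact code):
get_meminfo_sketch() {
    local get=$1 var val _
    # split each "Key:   value kB" line on ':' and spaces
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"        # emit the numeric value for the requested key
            return 0
        fi
    done < /proc/meminfo
    return 1
}
# e.g. get_meminfo_sketch HugePages_Total would print 512 on this VM.
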
00:05:41.786 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.786 16:17:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.786 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.786 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.786 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.786 16:17:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.786 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.786 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.786 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.786 16:17:15 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.786 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.786 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.786 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.786 16:17:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.786 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.786 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.786 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.786 16:17:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.786 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.786 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.786 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.786 16:17:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.786 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.786 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.786 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.786 16:17:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.786 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.786 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.786 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.786 16:17:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.786 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.786 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.786 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.786 16:17:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.786 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.786 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.786 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.786 16:17:15 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.786 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.786 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.786 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.786 16:17:15 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.786 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.786 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.786 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.786 16:17:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.786 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.786 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.786 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.786 16:17:15 -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.786 16:17:15 -- setup/common.sh@33 -- # echo 512 00:05:41.786 16:17:15 -- setup/common.sh@33 -- # return 0 00:05:41.786 16:17:15 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:41.786 16:17:15 -- setup/hugepages.sh@112 -- # get_nodes 00:05:41.786 16:17:15 -- setup/hugepages.sh@27 -- # local node 00:05:41.786 16:17:15 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:41.786 16:17:15 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:41.786 16:17:15 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:41.786 16:17:15 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:41.786 16:17:15 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:41.786 16:17:15 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:41.786 16:17:15 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:41.786 16:17:15 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:41.786 16:17:15 -- setup/common.sh@18 -- # local node=0 00:05:41.786 16:17:15 -- setup/common.sh@19 -- # local var val 00:05:41.786 16:17:15 -- setup/common.sh@20 -- # local mem_f mem 00:05:41.786 16:17:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:41.786 16:17:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:41.786 16:17:15 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:41.786 16:17:15 -- setup/common.sh@28 -- # mapfile -t mem 00:05:41.786 16:17:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:41.786 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.787 16:17:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8599540 kB' 'MemUsed: 3642436 kB' 'SwapCached: 0 kB' 'Active: 892564 kB' 'Inactive: 1367872 kB' 'Active(anon): 133216 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1367872 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1488 kB' 'Writeback: 0 kB' 'FilePages: 2137684 kB' 'Mapped: 48868 kB' 'AnonPages: 124324 kB' 'Shmem: 10464 kB' 'KernelStack: 6560 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70240 kB' 'Slab: 147092 kB' 'SReclaimable: 70240 kB' 'SUnreclaim: 76852 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.787 16:17:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.787 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.787 16:17:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.787 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.787 16:17:15 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.787 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.787 16:17:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.787 16:17:15 
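
# NOTE (annotation, not captured output): for the per-node check, node=0 switches
# mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo, whose lines
# carry a "Node 0 " prefix; the mem=("${mem[@]#Node +([0-9]) }") expansion seen in
# the trace strips that prefix with an extglob pattern before the key/value split.
# A hedged sketch of that step, assuming extglob is enabled as the trace implies:
shopt -s extglob
mapfile -t mem < /sys/devices/system/node/node0/meminfo
mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 HugePages_Total:   512" -> "HugePages_Total:   512"
printf '%s\n' "${mem[@]}" | grep '^HugePages_Total'
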
-- setup/common.sh@32 -- # continue 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.787 16:17:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.787 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.787 16:17:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.787 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.787 16:17:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.787 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.787 16:17:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.787 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.787 16:17:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.787 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.787 16:17:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.787 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.787 16:17:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.787 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.787 16:17:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.787 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.787 16:17:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.787 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.787 16:17:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.787 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.787 16:17:15 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.787 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.787 16:17:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.787 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.787 16:17:15 -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.787 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.787 16:17:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.787 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.787 16:17:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.787 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.787 16:17:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.787 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.787 16:17:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.787 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.787 16:17:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.787 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.787 16:17:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.787 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.787 16:17:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.787 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.787 16:17:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.787 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.787 16:17:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.787 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.787 16:17:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.787 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.787 16:17:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.787 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.787 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.788 16:17:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.788 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.788 16:17:15 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:41.788 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.788 16:17:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.788 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.788 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.788 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.788 16:17:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.788 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.788 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.788 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.788 16:17:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.788 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.788 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.788 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.788 16:17:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.788 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.788 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.788 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.788 16:17:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.788 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.788 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.788 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.788 16:17:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.788 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.788 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.788 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.788 16:17:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.788 16:17:15 -- setup/common.sh@32 -- # continue 00:05:41.788 16:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.788 16:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.788 16:17:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.788 16:17:15 -- setup/common.sh@33 -- # echo 0 00:05:41.788 16:17:15 -- setup/common.sh@33 -- # return 0 00:05:41.788 16:17:15 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:41.788 16:17:15 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:41.788 16:17:15 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:41.788 16:17:15 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:41.788 node0=512 expecting 512 00:05:41.788 16:17:15 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:41.788 16:17:15 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:41.788 00:05:41.788 real 0m0.498s 00:05:41.788 user 0m0.239s 00:05:41.788 sys 0m0.291s 00:05:41.788 16:17:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:41.788 16:17:15 -- common/autotest_common.sh@10 -- # set +x 00:05:41.788 ************************************ 00:05:41.788 END TEST custom_alloc 00:05:41.788 ************************************ 00:05:41.788 16:17:15 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:41.788 16:17:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:41.788 16:17:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:41.788 16:17:15 -- common/autotest_common.sh@10 -- # set +x 00:05:42.046 ************************************ 00:05:42.046 
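
# NOTE (annotation, not captured output): custom_alloc passes because the counters
# read above satisfy the identity the suite asserts -- with nr_hugepages=512
# requested, HugePages_Surp=0 and HugePages_Rsvd=0:
#     HugePages_Total (512) == nr_hugepages (512) + surp (0) + resv (0)
# and node0 holds all 512 pages ("node0=512 expecting 512"). The run_test wrapper
# then prints the real/user/sys timing and the END TEST banner seen above. A
# hedged one-liner expressing the same accounting check:
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
(( total == 512 + surp + resv )) && echo "custom_alloc accounting holds"
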
START TEST no_shrink_alloc 00:05:42.046 ************************************ 00:05:42.046 16:17:15 -- common/autotest_common.sh@1111 -- # no_shrink_alloc 00:05:42.046 16:17:15 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:42.046 16:17:15 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:42.046 16:17:15 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:42.046 16:17:15 -- setup/hugepages.sh@51 -- # shift 00:05:42.046 16:17:15 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:42.046 16:17:15 -- setup/hugepages.sh@52 -- # local node_ids 00:05:42.046 16:17:15 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:42.047 16:17:15 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:42.047 16:17:15 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:42.047 16:17:15 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:42.047 16:17:15 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:42.047 16:17:15 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:42.047 16:17:15 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:42.047 16:17:15 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:42.047 16:17:15 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:42.047 16:17:15 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:42.047 16:17:15 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:42.047 16:17:15 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:42.047 16:17:15 -- setup/hugepages.sh@73 -- # return 0 00:05:42.047 16:17:15 -- setup/hugepages.sh@198 -- # setup output 00:05:42.047 16:17:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:42.047 16:17:15 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:42.309 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:42.309 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:42.309 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:42.309 16:17:16 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:42.309 16:17:16 -- setup/hugepages.sh@89 -- # local node 00:05:42.309 16:17:16 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:42.309 16:17:16 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:42.309 16:17:16 -- setup/hugepages.sh@92 -- # local surp 00:05:42.309 16:17:16 -- setup/hugepages.sh@93 -- # local resv 00:05:42.309 16:17:16 -- setup/hugepages.sh@94 -- # local anon 00:05:42.309 16:17:16 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:42.309 16:17:16 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:42.309 16:17:16 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:42.309 16:17:16 -- setup/common.sh@18 -- # local node= 00:05:42.309 16:17:16 -- setup/common.sh@19 -- # local var val 00:05:42.309 16:17:16 -- setup/common.sh@20 -- # local mem_f mem 00:05:42.309 16:17:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:42.309 16:17:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:42.309 16:17:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:42.309 16:17:16 -- setup/common.sh@28 -- # mapfile -t mem 00:05:42.309 16:17:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:42.309 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.309 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.309 16:17:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7555320 kB' 
'MemAvailable: 9480720 kB' 'Buffers: 2436 kB' 'Cached: 2135252 kB' 'SwapCached: 0 kB' 'Active: 887816 kB' 'Inactive: 1367876 kB' 'Active(anon): 128468 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1367876 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1632 kB' 'Writeback: 0 kB' 'AnonPages: 119584 kB' 'Mapped: 48320 kB' 'Shmem: 10464 kB' 'KReclaimable: 70232 kB' 'Slab: 147008 kB' 'SReclaimable: 70232 kB' 'SUnreclaim: 76776 kB' 'KernelStack: 6452 kB' 'PageTables: 3948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 329612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB' 00:05:42.309 16:17:16 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.309 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.309 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.309 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.309 16:17:16 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.309 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.309 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.309 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.309 16:17:16 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.309 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.309 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.309 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.309 16:17:16 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.309 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.309 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.309 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.309 16:17:16 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.309 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.309 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.309 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.309 16:17:16 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.309 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.309 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.309 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.309 16:17:16 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.309 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.309 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.309 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.309 16:17:16 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.309 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.309 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.309 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.309 16:17:16 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.309 16:17:16 -- 
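
# NOTE (annotation, not captured output): no_shrink_alloc asked
# get_test_nr_hugepages for 2097152 kB on node 0; with the 2048 kB default huge
# page size that works out to
#     nr_hugepages = 2097152 / 2048 = 1024
# which matches the HugePages_Total: 1024 / Hugetlb: 2097152 kB pair in the dump
# above. The same derivation, as a hedged sketch:
size_kb=2097152
hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this VM
echo "nr_hugepages=$(( size_kb / hp_kb ))"                 # -> nr_hugepages=1024
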
setup/common.sh@32 -- # continue 00:05:42.309 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.309 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.309 16:17:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.309 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.309 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.309 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.309 16:17:16 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.309 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.309 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.309 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.309 16:17:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.309 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.309 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.309 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.309 16:17:16 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.309 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.309 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.309 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.310 16:17:16 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.310 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.310 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.310 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.310 16:17:16 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.310 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.310 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.310 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.310 16:17:16 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.310 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.310 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.310 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.310 16:17:16 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.310 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.310 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.310 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.310 16:17:16 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.310 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.310 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.310 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.310 16:17:16 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.310 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.310 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.310 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.310 16:17:16 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.310 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.310 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.310 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.310 16:17:16 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.310 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.310 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.310 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.310 16:17:16 -- setup/common.sh@32 -- # [[ 
Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.310 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.310 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.310 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.310 16:17:16 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.310 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.310 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.310 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.310 16:17:16 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.310 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.310 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.310 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.310 16:17:16 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.310 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.310 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.310 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.310 16:17:16 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.310 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.310 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.310 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.310 16:17:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.310 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.310 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.310 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.310 16:17:16 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.310 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.310 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.310 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.310 16:17:16 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.310 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.310 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.310 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.310 16:17:16 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.310 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.310 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.310 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.310 16:17:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.310 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.310 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.310 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.310 16:17:16 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.310 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.310 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.310 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.310 16:17:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.310 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.310 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.310 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.310 16:17:16 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.310 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.310 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.310 16:17:16 -- setup/common.sh@31 -- # 
read -r var val _
[log condensed: setup/common.sh@32 tests each remaining /proc/meminfo field (Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted) against \A\n\o\n\H\u\g\e\P\a\g\e\s; every miss hits '-- # continue']
00:05:42.310 16:17:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:42.310 16:17:16 -- setup/common.sh@33 -- # echo 0
00:05:42.310 16:17:16 -- setup/common.sh@33 -- # return 0
00:05:42.310 16:17:16 -- setup/hugepages.sh@97 -- # anon=0
00:05:42.310 16:17:16 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:42.310 16:17:16 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:42.310 16:17:16 -- setup/common.sh@18 -- # local node=
00:05:42.310 16:17:16 -- setup/common.sh@19 -- # local var val
00:05:42.310 16:17:16 -- setup/common.sh@20 -- # local mem_f mem
00:05:42.310 16:17:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:42.310 16:17:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:42.310 16:17:16 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:42.310 16:17:16 -- setup/common.sh@28 -- # mapfile -t mem
00:05:42.310 16:17:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:42.310 16:17:16 -- setup/common.sh@31 -- # IFS=': '
00:05:42.310 16:17:16 -- setup/common.sh@31 -- # read -r var val _
00:05:42.310 16:17:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7555304 kB' 'MemAvailable: 9480704 kB' 'Buffers: 2436 kB' 'Cached: 2135252 kB' 'SwapCached: 0 kB' 'Active: 887572 kB' 'Inactive: 1367876 kB' 'Active(anon): 128224 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1367876 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1632 kB' 'Writeback: 0 kB' 'AnonPages: 119588 kB' 'Mapped: 48260 kB' 'Shmem: 10464 kB' 'KReclaimable: 70232 kB' 'Slab: 147008 kB' 'SReclaimable: 70232 kB' 'SUnreclaim: 76776 kB' 'KernelStack: 6404 kB' 'PageTables: 3828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 329612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB'
[log condensed: setup/common.sh@31-32 walk the snapshot fields (MemTotal … HugePages_Rsvd) against \H\u\g\e\P\a\g\e\s\_\S\u\r\p; every miss hits '-- # continue']
00:05:42.312 16:17:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:42.312 16:17:16 -- setup/common.sh@33 -- # echo 0
00:05:42.312 16:17:16 -- setup/common.sh@33 -- # return 0
00:05:42.312 16:17:16 -- setup/hugepages.sh@99 -- # surp=0
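Editor's note: the trace above (and most of what follows) is repeated runs of a single helper, get_meminfo() in setup/common.sh. Below is a minimal sketch of the loop those xtrace lines imply, reconstructed from the trace alone; the function body is an assumption, not the verbatim SPDK source, and only the statements that actually appear in the trace are certain.

shopt -s extglob                                  # the +([0-9]) glob below needs extglob

# get_meminfo FIELD [NODE] -- print FIELD's value from /proc/meminfo, or from
# the per-node meminfo file when NODE is given (sketch).
get_meminfo() {
    local get=$1 node=${2:-}
    local var val _
    local mem_f mem
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo   # per-node variant (@24)
    fi
    mapfile -t mem <"$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")              # per-node lines start with "Node N "
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue          # each miss is one '# continue' record
        echo "$val"                               # e.g. 0 for HugePages_Surp above
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1                                      # field not found
}

surp=$(get_meminfo HugePages_Surp)                # -> 0 in the run above

Each '# continue' record in the log is one non-matching snapshot field that the printf feeds into the read loop, which is why the scan dominates this section.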
00:05:42.312 16:17:16 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[log condensed: same setup/common.sh@17-31 prologue as above, with get=HugePages_Rsvd; mem_f stays /proc/meminfo]
00:05:42.312 16:17:16 -- setup/common.sh@16 -- # printf '%s\n' [snapshot as in the first printf above, except: 'Active: 887372 kB' 'Active(anon): 128024 kB' 'AnonPages: 119372 kB' 'Mapped: 48136 kB' 'Slab: 147004 kB' 'SUnreclaim: 76772 kB' 'KernelStack: 6432 kB' 'PageTables: 3800 kB']
[log condensed: setup/common.sh@31-32 walk the snapshot fields (MemTotal … HugePages_Free) against \H\u\g\e\P\a\g\e\s\_\R\s\v\d; every miss hits '-- # continue']
00:05:42.314 16:17:16 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:42.314 16:17:16 -- setup/common.sh@33 -- # echo 0
00:05:42.314 16:17:16 -- setup/common.sh@33 -- # return 0
00:05:42.314 16:17:16 -- setup/hugepages.sh@100 -- # resv=0
00:05:42.314 nr_hugepages=1024
00:05:42.314 16:17:16 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:42.314 resv_hugepages=0
00:05:42.314 16:17:16 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:42.314 surplus_hugepages=0
00:05:42.314 16:17:16 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:42.314 anon_hugepages=0
00:05:42.314 16:17:16 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
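Editor's note: the four name=value lines above are verify_nr_hugepages() reporting its three probes. A sketch of that step, under the assumption that nr_hugepages=1024 was set earlier in the script; variable names are taken from the trace, the surrounding function body is inferred, not copied from SPDK.

anon=$(get_meminfo AnonHugePages)     # 0 kB of transparent hugepages in use
surp=$(get_meminfo HugePages_Surp)    # 0 surplus pages
resv=$(get_meminfo HugePages_Rsvd)    # 0 reserved pages
echo "nr_hugepages=$nr_hugepages"     # 1024 -- the allocation this run expects
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"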
00:05:42.314 16:17:16 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:42.314 16:17:16 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:05:42.314 16:17:16 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[log condensed: same setup/common.sh@17-31 prologue, with get=HugePages_Total; mem_f stays /proc/meminfo]
00:05:42.314 16:17:16 -- setup/common.sh@16 -- # printf '%s\n' [snapshot as in the first printf above, except: 'Active: 887336 kB' 'Active(anon): 127988 kB' 'AnonPages: 119372 kB' 'Mapped: 48136 kB' 'Slab: 147000 kB' 'SUnreclaim: 76768 kB' 'KernelStack: 6464 kB' 'PageTables: 3900 kB']
[log condensed: setup/common.sh@31-32 walk the snapshot fields (MemTotal … Unaccepted) against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l; every miss hits '-- # continue']
00:05:42.315 16:17:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:42.315 16:17:16 -- setup/common.sh@33 -- # echo 1024
00:05:42.315 16:17:16 -- setup/common.sh@33 -- # return 0
00:05:42.315 16:17:16 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:42.315 16:17:16 -- setup/hugepages.sh@112 -- # get_nodes
00:05:42.315 16:17:16 -- setup/hugepages.sh@27 -- # local node
00:05:42.315 16:17:16 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:42.315 16:17:16 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:42.315 16:17:16 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:42.315 16:17:16 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:42.315 16:17:16 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:42.315 16:17:16 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:42.315 16:17:16 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:42.315 16:17:16 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:42.315 16:17:16 -- setup/common.sh@18 -- # local node=0
00:05:42.315 16:17:16 -- setup/common.sh@19 -- # local var val
00:05:42.315 16:17:16 -- setup/common.sh@20 -- # local mem_f mem
00:05:42.315 16:17:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:42.315 16:17:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:42.315 16:17:16 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:42.316 16:17:16 -- setup/common.sh@28 -- # mapfile -t mem
00:05:42.316 16:17:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
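Editor's note: at this point the trace switches to the per-node form, get_meminfo HugePages_Surp 0, so common.sh@23-24 select /sys/devices/system/node/node0/meminfo and @29 strips the "Node 0 " prefix its lines carry. A sketch of the node iteration implied by hugepages.sh@29-@128 follows; the nodes_test handling is partly assumed, since the trace only shows nodes_sys being filled.

shopt -s extglob
nodes_sys=() nodes_test=()
for node in /sys/devices/system/node/node+([0-9]); do    # one entry per NUMA node (@29)
    nodes_sys[${node##*node}]=1024                       # expected pages on that node (@30)
    nodes_test[${node##*node}]=1024                      # working copy -- assumed, not in the trace
done
no_nodes=${#nodes_sys[@]}                                # 1 on this VM (@32)
resv=0                                                   # from the HugePages_Rsvd probe above
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))                       # fold in reserved pages (@116)
    surp=$(get_meminfo HugePages_Surp "$node")           # per-node probe (@117)
    (( nodes_test[node] += surp ))
    echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"   # (@128)
done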
00:05:42.316 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.316 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.316 16:17:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7555304 kB' 'MemUsed: 4686672 kB' 'SwapCached: 0 kB' 'Active: 887312 kB' 'Inactive: 1367876 kB' 'Active(anon): 127964 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1367876 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1632 kB' 'Writeback: 0 kB' 'FilePages: 2137688 kB' 'Mapped: 48136 kB' 'AnonPages: 119336 kB' 'Shmem: 10464 kB' 'KernelStack: 6448 kB' 'PageTables: 3844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70232 kB' 'Slab: 147000 kB' 'SReclaimable: 70232 kB' 'SUnreclaim: 76768 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:42.316 16:17:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.316 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.316 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.316 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.316 16:17:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.316 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.316 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.316 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.316 16:17:16 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.316 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.316 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.316 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.316 16:17:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.316 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.316 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.316 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.316 16:17:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.316 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.316 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.316 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.316 16:17:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.316 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.316 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.316 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.316 16:17:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.316 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.316 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.316 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.316 16:17:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.316 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.316 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.316 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.316 16:17:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.316 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.316 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.316 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.316 16:17:16 -- 
setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.316 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.316 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.316 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.316 16:17:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.316 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.316 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.316 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.316 16:17:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.316 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.316 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.316 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.316 16:17:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.316 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.316 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.316 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.316 16:17:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.316 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.316 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.316 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.316 16:17:16 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.316 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.316 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.316 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.316 16:17:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.316 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.316 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.316 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.316 16:17:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.316 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.316 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.316 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.316 16:17:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.316 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.316 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.316 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.316 16:17:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.316 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.316 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.316 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.316 16:17:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.316 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.316 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.316 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.316 16:17:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.316 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.316 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.316 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.316 16:17:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.316 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.316 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 
00:05:42.316 16:17:16 [xtrace condensed: the remaining meminfo keys (Bounce through HugePages_Free) are each tested against HugePages_Surp and hit `continue`]
00:05:42.317 16:17:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:42.317 16:17:16 -- setup/common.sh@33 -- # echo 0
00:05:42.317 16:17:16 -- setup/common.sh@33 -- # return 0
00:05:42.575 16:17:16 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:42.575 16:17:16 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:42.575 16:17:16 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:42.575 16:17:16 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:42.575 node0=1024 expecting 1024
00:05:42.575 16:17:16 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:42.575 16:17:16 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:42.575 16:17:16 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:05:42.575 16:17:16 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:05:42.575 16:17:16 -- setup/hugepages.sh@202 -- # setup output
00:05:42.575 16:17:16 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:42.575 16:17:16 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:42.837 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:42.837 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:42.837 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:42.837 INFO: Requested 512 hugepages but 1024 already allocated on node0
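The INFO line above records setup.sh's decision: NRHUGE=512 pages were requested with CLEAR_HUGE=no, and node0 already holds 1024, so the existing pool is left in place. A minimal sketch of that kind of check, assuming the standard kernel sysfs layout (the variable names and control flow here are illustrative, not setup.sh's actual code):

  #!/usr/bin/env bash
  # Illustrative re-creation of the allocation check behind the INFO line.
  # With CLEAR_HUGE=no, an existing pool that already covers the request
  # is reused instead of being resized.
  requested=${NRHUGE:-512}
  sysfs=/sys/devices/system/node/node0/hugepages/hugepages-2048kB
  allocated=$(<"$sysfs/nr_hugepages")
  if (( allocated >= requested )); then
      echo "INFO: Requested $requested hugepages but $allocated already allocated on node0"
  else
      echo "$requested" > "$sysfs/nr_hugepages"   # growing the pool needs root
  fi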
00:05:42.837 16:17:16 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:05:42.837 16:17:16 -- setup/hugepages.sh@89 -- # local node
00:05:42.837 16:17:16 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:42.837 16:17:16 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:42.837 16:17:16 -- setup/hugepages.sh@92 -- # local surp
00:05:42.837 16:17:16 -- setup/hugepages.sh@93 -- # local resv
00:05:42.837 16:17:16 -- setup/hugepages.sh@94 -- # local anon
00:05:42.837 16:17:16 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:42.837 16:17:16 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:42.837 16:17:16 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:42.837 16:17:16 -- setup/common.sh@18 -- # local node=
00:05:42.837 16:17:16 -- setup/common.sh@19 -- # local var val
00:05:42.837 16:17:16 -- setup/common.sh@20 -- # local mem_f mem
00:05:42.837 16:17:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:42.837 16:17:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:42.837 16:17:16 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:42.837 16:17:16 -- setup/common.sh@28 -- # mapfile -t mem
00:05:42.837 16:17:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:42.837 16:17:16 -- setup/common.sh@31 -- # IFS=': '
00:05:42.837 16:17:16 -- setup/common.sh@31 -- # read -r var val _
00:05:42.837 16:17:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7555576 kB' 'MemAvailable: 9480976 kB' 'Buffers: 2436 kB' 'Cached: 2135252 kB' 'SwapCached: 0 kB' 'Active: 887944 kB' 'Inactive: 1367876 kB' 'Active(anon): 128596 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1367876 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1632 kB' 'Writeback: 0 kB' 'AnonPages: 119720 kB' 'Mapped: 48256 kB' 'Shmem: 10464 kB' 'KReclaimable: 70232 kB' 'Slab: 146992 kB' 'SReclaimable: 70232 kB' 'SUnreclaim: 76760 kB' 'KernelStack: 6500 kB' 'PageTables: 3820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 329612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 6129664 kB' 'DirectMap1G: 8388608 kB'
00:05:42.838 16:17:16 [xtrace condensed: every key from MemTotal through HardwareCorrupted is tested against AnonHugePages and hits `continue`; AnonHugePages then matches]
00:05:42.838 16:17:16 -- setup/common.sh@33 -- # echo 0
00:05:42.838 16:17:16 -- setup/common.sh@33 -- # return 0
00:05:42.838 16:17:16 -- setup/hugepages.sh@97 -- # anon=0
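Each of the four get_meminfo calls in this verification follows the same pattern the condensed trace shows: snapshot the meminfo source into an array, split each line on ':' and whitespace, and return the value of the first key that matches. A minimal runnable sketch of that loop, with an illustrative helper name (the real logic lives in setup/common.sh):

  #!/usr/bin/env bash
  # Sketch of the key scan condensed in the trace above; meminfo_value is
  # an illustrative name, not the script's.
  meminfo_value() {
      local get=$1 line var val _
      local -a mem
      mapfile -t mem < /proc/meminfo        # one element per "Key: value" line
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          # Every non-matching key is one of the `continue` steps in the trace.
          [[ $var == "$get" ]] || continue
          echo "$val"                        # kB for sizes, a bare count for HugePages_*
          return 0
      done
      return 1
  }

  meminfo_value AnonHugePages     # prints 0 on the machine traced here
  meminfo_value HugePages_Total   # prints 1024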
00:05:42.838 16:17:16 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:42.838 16:17:16 [xtrace condensed: get_meminfo re-reads /proc/meminfo; the snapshot matches the one above except MemFree: 7555828 kB, MemAvailable: 9481228 kB, Active: 887312 kB, Active(anon): 127964 kB, Dirty: 256 kB, AnonPages: 119332 kB, Mapped: 48136 kB, Slab: 146984 kB, SUnreclaim: 76752 kB, KernelStack: 6448 kB, PageTables: 3848 kB, VmallocUsed: 54820 kB; every key from MemTotal through HugePages_Rsvd hits `continue` until HugePages_Surp matches]
00:05:42.839 16:17:16 -- setup/common.sh@33 -- # echo 0
00:05:42.839 16:17:16 -- setup/common.sh@33 -- # return 0
00:05:42.839 16:17:16 -- setup/hugepages.sh@99 -- # surp=0
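The get_meminfo preamble shown earlier (`local node=`, the test against /sys/devices/system/node/node/meminfo, and the `Node +([0-9])` prefix strip) indicates the helper can also read one NUMA node's snapshot instead of the global one. A hedged sketch of that source selection under the standard sysfs layout (function and variable names are illustrative):

  #!/usr/bin/env bash
  # Sketch of the per-node source selection visible in the trace's preamble.
  shopt -s extglob   # the +([0-9]) pattern below needs extended globbing

  node_meminfo() {
      local node=$1 mem_f=/proc/meminfo
      local -a mem
      # Given a node number, read that node's snapshot instead of the global one.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      # Per-node lines read "Node 0 MemTotal: ..."; strip the prefix so the
      # same "Key: value" parsing works for both sources.
      mem=("${mem[@]#Node +([0-9]) }")
      printf '%s\n' "${mem[@]}"
  }

  node_meminfo 0 | grep HugePages_   # e.g. inspect node0's hugepage counters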
00:05:42.839 16:17:16 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:42.839 16:17:16 [xtrace condensed: the snapshot matches the previous one except Active: 887564 kB, Active(anon): 128216 kB, AnonPages: 119360 kB, KernelStack: 6464 kB, PageTables: 3908 kB, VmallocUsed: 54804 kB; every key from MemTotal through HugePages_Free hits `continue` until HugePages_Rsvd matches]
00:05:42.840 16:17:16 -- setup/common.sh@33 -- # echo 0
00:05:42.840 16:17:16 -- setup/common.sh@33 -- # return 0
00:05:42.840 16:17:16 -- setup/hugepages.sh@100 -- # resv=0
00:05:42.840 nr_hugepages=1024
00:05:42.840 16:17:16 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:42.840 resv_hugepages=0
00:05:42.840 16:17:16 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:42.840 surplus_hugepages=0
00:05:42.840 16:17:16 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:42.840 anon_hugepages=0
00:05:42.840 16:17:16 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
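At this point all four counters are known (nr_hugepages=1024, resv=0, surp=0, anon=0), and the two arithmetic guards on the trace lines below only have to confirm that the kernel's totals agree. Restated with the logged values, as a sketch rather than hugepages.sh itself:

  #!/usr/bin/env bash
  # The two (( ... )) guards that follow in the trace, restated with the
  # values this run logged. Variable names mirror the trace.
  nr_hugepages=1024   # expected pool size ("node0=1024 expecting 1024")
  surp=0              # HugePages_Surp
  resv=0              # HugePages_Rsvd
  total=1024          # HugePages_Total from /proc/meminfo
  if (( total == nr_hugepages + surp + resv )) && (( total == nr_hugepages )); then
      echo "hugepage pool consistent: $total pages"
  else
      echo "hugepage pool mismatch: total=$total expected=$nr_hugepages surp=$surp resv=$resv" >&2
      exit 1
  fi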
00:05:42.840 16:17:16 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:42.840 16:17:16 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:05:42.840 16:17:16 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:42.840 16:17:16 [xtrace condensed: get_meminfo re-reads /proc/meminfo; the snapshot matches the previous one except MemFree: 7556444 kB, MemAvailable: 9481844 kB, Active: 887336 kB, Active(anon): 127988 kB, AnonPages: 119368 kB, PageTables: 3904 kB; the keys are being tested against HugePages_Total when this portion of the log ends]
setup/common.sh@31 -- # read -r var val _ 00:05:42.841 16:17:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.841 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.841 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.841 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.841 16:17:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.841 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.841 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.841 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.841 16:17:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.841 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.841 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.841 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.841 16:17:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.841 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.841 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.841 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.841 16:17:16 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.841 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.841 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.841 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.841 16:17:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.841 16:17:16 -- setup/common.sh@32 -- # continue 00:05:42.841 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.841 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.841 16:17:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.841 16:17:16 -- setup/common.sh@33 -- # echo 1024 00:05:42.841 16:17:16 -- setup/common.sh@33 -- # return 0 00:05:43.101 16:17:16 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:43.101 16:17:16 -- setup/hugepages.sh@112 -- # get_nodes 00:05:43.101 16:17:16 -- setup/hugepages.sh@27 -- # local node 00:05:43.101 16:17:16 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:43.101 16:17:16 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:43.101 16:17:16 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:43.101 16:17:16 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:43.101 16:17:16 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:43.101 16:17:16 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:43.101 16:17:16 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:43.101 16:17:16 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:43.101 16:17:16 -- setup/common.sh@18 -- # local node=0 00:05:43.101 16:17:16 -- setup/common.sh@19 -- # local var val 00:05:43.101 16:17:16 -- setup/common.sh@20 -- # local mem_f mem 00:05:43.101 16:17:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:43.101 16:17:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:43.101 16:17:16 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:43.101 16:17:16 -- setup/common.sh@28 -- # mapfile -t mem 00:05:43.101 16:17:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:43.101 16:17:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.101 16:17:16 -- setup/common.sh@31 -- # read -r var val _ 
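The condensed scan above is the get_meminfo idiom from setup/common.sh: split each meminfo line on ': ', continue past every key until the requested one comes up, then echo its value. A minimal standalone sketch of the same pattern follows; the function name and the default file argument are illustrative, not SPDK's exact code:

    # Print the value recorded for one key in a meminfo-style file.
    get_meminfo_value() {
        local key=$1 file=${2:-/proc/meminfo}
        local var val _
        while IFS=': ' read -r var val _; do
            # $var is the key, $val the number; $_ swallows the 'kB' unit
            if [[ $var == "$key" ]]; then
                echo "$val"
                return 0
            fi
        done <"$file"
        return 1
    }

    get_meminfo_value HugePages_Total   # prints 1024 on the node traced above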
00:05:43.101 16:17:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7556444 kB' 'MemUsed: 4685532 kB' 'SwapCached: 0 kB' 'Active: 887548 kB' 'Inactive: 1367876 kB' 'Active(anon): 128200 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1367876 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'FilePages: 2137688 kB' 'Mapped: 48136 kB' 'AnonPages: 119372 kB' 'Shmem: 10464 kB' 'KernelStack: 6464 kB' 'PageTables: 3908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70232 kB' 'Slab: 146984 kB' 'SReclaimable: 70232 kB' 'SUnreclaim: 76752 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace condensed: the same per-key scan now runs over this node0 dump, comparing every key from MemTotal through HugePages_Free against HugePages_Surp and continuing until the last line matches]
00:05:43.103 16:17:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:43.103 16:17:16 -- setup/common.sh@33 -- # echo 0
00:05:43.103 16:17:16 -- setup/common.sh@33 -- # return 0
00:05:43.103 16:17:16 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:43.103 16:17:16 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:43.103 16:17:16 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:43.103 16:17:16 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:43.103 node0=1024 expecting 1024
16:17:16 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:43.103 16:17:16 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:43.103 
00:05:43.103 real 0m1.066s
00:05:43.103 user 0m0.515s
00:05:43.103 sys 0m0.581s
00:05:43.103 16:17:16 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:43.103 16:17:16 -- common/autotest_common.sh@10 -- # set +x
00:05:43.103 ************************************
00:05:43.103 END TEST no_shrink_alloc
00:05:43.103 ************************************
00:05:43.103 16:17:16 -- setup/hugepages.sh@217 -- # clear_hp
00:05:43.103 16:17:16 -- setup/hugepages.sh@37 -- # local node hp
00:05:43.103 16:17:16 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:05:43.103 16:17:16 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:43.103 16:17:16 -- setup/hugepages.sh@41 -- # echo 0
00:05:43.103 16:17:16 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:43.103 16:17:16 -- setup/hugepages.sh@41 -- # echo 0
00:05:43.103 16:17:16 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:05:43.103 16:17:16 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:05:43.103 
00:05:43.103 real 0m5.026s
00:05:43.103 user 0m2.328s
00:05:43.103 sys 0m2.718s
00:05:43.103 16:17:16 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:43.103 16:17:16 -- common/autotest_common.sh@10 -- # set +x
00:05:43.103 ************************************
00:05:43.103 END TEST hugepages
00:05:43.103 ************************************
00:05:43.103 16:17:16 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh
00:05:43.103 16:17:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:43.103 16:17:16 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:43.103 16:17:16 -- common/autotest_common.sh@10 -- # set +x
00:05:43.103 ************************************
00:05:43.103 START TEST driver
00:05:43.103 ************************************
00:05:43.103 16:17:17 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh
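The clear_hp teardown traced just above is what returns the machine to zero reserved hugepages between suites: write 0 into every per-node nr_hugepages file, then flag the fact for later scripts. A sketch of that idiom under the standard sysfs layout (root privileges assumed; this mirrors, but is not copied from, hugepages.sh):

    # Drop every persistent hugepage reservation, one page size at a time.
    for node in /sys/devices/system/node/node*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"   # releases the pool for this size
        done
    done
    export CLEAR_HUGE=yes                 # consumed by SPDK's setup.sh later on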
* Looking for test storage...
00:05:43.362 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:05:43.362 16:17:17 -- setup/driver.sh@68 -- # setup reset
00:05:43.362 16:17:17 -- setup/common.sh@9 -- # [[ reset == output ]]
00:05:43.362 16:17:17 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:05:43.929 16:17:17 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:05:43.929 16:17:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:43.929 16:17:17 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:43.929 16:17:17 -- common/autotest_common.sh@10 -- # set +x
00:05:43.929 ************************************
00:05:43.929 START TEST guess_driver
00:05:43.929 ************************************
00:05:43.929 16:17:17 -- common/autotest_common.sh@1111 -- # guess_driver
00:05:43.929 16:17:17 -- setup/driver.sh@46 -- # local driver setup_driver marker
00:05:43.929 16:17:17 -- setup/driver.sh@47 -- # local fail=0
00:05:43.929 16:17:17 -- setup/driver.sh@49 -- # pick_driver
00:05:43.929 16:17:17 -- setup/driver.sh@36 -- # vfio
00:05:43.929 16:17:17 -- setup/driver.sh@21 -- # local iommu_groups
00:05:43.929 16:17:17 -- setup/driver.sh@22 -- # local unsafe_vfio
00:05:43.929 16:17:17 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:05:43.929 16:17:17 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:05:43.929 16:17:17 -- setup/driver.sh@29 -- # (( 0 > 0 ))
00:05:43.929 16:17:17 -- setup/driver.sh@29 -- # [[ '' == Y ]]
00:05:43.929 16:17:17 -- setup/driver.sh@32 -- # return 1
00:05:43.929 16:17:17 -- setup/driver.sh@38 -- # uio
00:05:43.929 16:17:17 -- setup/driver.sh@17 -- # is_driver uio_pci_generic
00:05:43.929 16:17:17 -- setup/driver.sh@14 -- # mod uio_pci_generic
00:05:43.929 16:17:17 -- setup/driver.sh@12 -- # dep uio_pci_generic
00:05:43.929 16:17:17 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic
00:05:43.929 16:17:17 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz
00:05:43.929 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]]
00:05:43.929 16:17:17 -- setup/driver.sh@39 -- # echo uio_pci_generic
00:05:43.929 16:17:17 -- setup/driver.sh@49 -- # driver=uio_pci_generic
00:05:43.929 16:17:17 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:05:43.929 Looking for driver=uio_pci_generic
16:17:17 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic'
00:05:43.929 16:17:17 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:05:43.929 16:17:17 -- setup/driver.sh@45 -- # setup output config
00:05:43.929 16:17:17 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:43.929 16:17:17 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:05:44.497 16:17:18 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]]
00:05:44.497 16:17:18 -- setup/driver.sh@58 -- # continue
00:05:44.497 16:17:18 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:05:44.762 16:17:18 -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:05:44.762 16:17:18 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]]
00:05:44.762 16:17:18 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:05:44.762 16:17:18 -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:05:44.762 16:17:18 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]]
00:05:44.762 16:17:18 --
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:44.762 16:17:18 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:44.762 16:17:18 -- setup/driver.sh@65 -- # setup reset 00:05:44.762 16:17:18 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:44.762 16:17:18 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:45.328 00:05:45.328 real 0m1.431s 00:05:45.328 user 0m0.538s 00:05:45.328 sys 0m0.871s 00:05:45.328 16:17:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:45.328 16:17:19 -- common/autotest_common.sh@10 -- # set +x 00:05:45.328 ************************************ 00:05:45.328 END TEST guess_driver 00:05:45.328 ************************************ 00:05:45.328 00:05:45.328 real 0m2.210s 00:05:45.328 user 0m0.798s 00:05:45.328 sys 0m1.425s 00:05:45.328 16:17:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:45.328 ************************************ 00:05:45.328 END TEST driver 00:05:45.328 ************************************ 00:05:45.328 16:17:19 -- common/autotest_common.sh@10 -- # set +x 00:05:45.328 16:17:19 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:45.328 16:17:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:45.328 16:17:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:45.328 16:17:19 -- common/autotest_common.sh@10 -- # set +x 00:05:45.586 ************************************ 00:05:45.586 START TEST devices 00:05:45.586 ************************************ 00:05:45.586 16:17:19 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:45.586 * Looking for test storage... 00:05:45.586 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:45.586 16:17:19 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:45.586 16:17:19 -- setup/devices.sh@192 -- # setup reset 00:05:45.586 16:17:19 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:45.586 16:17:19 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:46.519 16:17:20 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:46.519 16:17:20 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:46.519 16:17:20 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:46.519 16:17:20 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:46.519 16:17:20 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:46.519 16:17:20 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:46.519 16:17:20 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:46.519 16:17:20 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:46.519 16:17:20 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:46.519 16:17:20 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:46.519 16:17:20 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n2 00:05:46.519 16:17:20 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:05:46.519 16:17:20 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:05:46.519 16:17:20 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:46.519 16:17:20 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:46.519 16:17:20 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n3 00:05:46.519 16:17:20 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:05:46.519 16:17:20 -- 
common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:05:46.519 16:17:20 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:46.519 16:17:20 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:46.519 16:17:20 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:05:46.519 16:17:20 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:05:46.519 16:17:20 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:46.519 16:17:20 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:46.519 16:17:20 -- setup/devices.sh@196 -- # blocks=() 00:05:46.519 16:17:20 -- setup/devices.sh@196 -- # declare -a blocks 00:05:46.519 16:17:20 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:46.519 16:17:20 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:46.519 16:17:20 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:46.519 16:17:20 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:46.519 16:17:20 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:46.520 16:17:20 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:46.520 16:17:20 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:46.520 16:17:20 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:46.520 16:17:20 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:46.520 16:17:20 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:46.520 16:17:20 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:46.520 No valid GPT data, bailing 00:05:46.520 16:17:20 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:46.520 16:17:20 -- scripts/common.sh@391 -- # pt= 00:05:46.520 16:17:20 -- scripts/common.sh@392 -- # return 1 00:05:46.520 16:17:20 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:46.520 16:17:20 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:46.520 16:17:20 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:46.520 16:17:20 -- setup/common.sh@80 -- # echo 4294967296 00:05:46.520 16:17:20 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:46.520 16:17:20 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:46.520 16:17:20 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:46.520 16:17:20 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:46.520 16:17:20 -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:05:46.520 16:17:20 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:46.520 16:17:20 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:46.520 16:17:20 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:46.520 16:17:20 -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:05:46.520 16:17:20 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:05:46.520 16:17:20 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:05:46.520 No valid GPT data, bailing 00:05:46.520 16:17:20 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:05:46.520 16:17:20 -- scripts/common.sh@391 -- # pt= 00:05:46.520 16:17:20 -- scripts/common.sh@392 -- # return 1 00:05:46.520 16:17:20 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:05:46.520 16:17:20 -- setup/common.sh@76 -- # local dev=nvme0n2 00:05:46.520 16:17:20 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:05:46.520 16:17:20 -- setup/common.sh@80 -- # echo 4294967296 00:05:46.520 16:17:20 -- 
setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:46.520 16:17:20 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:46.520 16:17:20 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:46.520 16:17:20 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:46.520 16:17:20 -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:05:46.520 16:17:20 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:46.520 16:17:20 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:46.520 16:17:20 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:46.520 16:17:20 -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:05:46.520 16:17:20 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:05:46.520 16:17:20 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:05:46.520 No valid GPT data, bailing 00:05:46.520 16:17:20 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:05:46.520 16:17:20 -- scripts/common.sh@391 -- # pt= 00:05:46.520 16:17:20 -- scripts/common.sh@392 -- # return 1 00:05:46.520 16:17:20 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:05:46.520 16:17:20 -- setup/common.sh@76 -- # local dev=nvme0n3 00:05:46.520 16:17:20 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:05:46.520 16:17:20 -- setup/common.sh@80 -- # echo 4294967296 00:05:46.520 16:17:20 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:46.520 16:17:20 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:46.520 16:17:20 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:46.520 16:17:20 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:46.520 16:17:20 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:46.520 16:17:20 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:46.520 16:17:20 -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:05:46.520 16:17:20 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:46.520 16:17:20 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:05:46.520 16:17:20 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:05:46.520 16:17:20 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:46.520 No valid GPT data, bailing 00:05:46.520 16:17:20 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:46.520 16:17:20 -- scripts/common.sh@391 -- # pt= 00:05:46.520 16:17:20 -- scripts/common.sh@392 -- # return 1 00:05:46.520 16:17:20 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:46.520 16:17:20 -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:46.520 16:17:20 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:46.520 16:17:20 -- setup/common.sh@80 -- # echo 5368709120 00:05:46.520 16:17:20 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:46.520 16:17:20 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:46.520 16:17:20 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:05:46.520 16:17:20 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:05:46.520 16:17:20 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:46.520 16:17:20 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:46.520 16:17:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:46.520 16:17:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:46.520 16:17:20 -- common/autotest_common.sh@10 -- # set +x 00:05:46.778 
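Every candidate disk in the scan above is size-gated against min_disk_size and probed for an existing partition table before the suite claims it. spdk-gpt.py is SPDK's own GPT reader, but the blkid fallback visible in the trace is enough to sketch the check; the helper name below is assumed, while the device path, the 3 GiB floor, and the blkid invocation are taken from the trace:

    # A disk qualifies when it is big enough and carries no partition table.
    disk_usable() {
        local dev=$1 min_size=3221225472   # 3 GiB, as min_disk_size above
        local pt size
        size=$(( $(cat "/sys/block/${dev##*/}/size") * 512 ))   # sectors to bytes
        pt=$(blkid -s PTTYPE -o value "$dev")                   # empty when blank
        (( size >= min_size )) && [[ -z $pt ]]
    }
    disk_usable /dev/nvme0n1 && echo "no partition table, safe to use"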
************************************ 00:05:46.778 START TEST nvme_mount 00:05:46.778 ************************************ 00:05:46.778 16:17:20 -- common/autotest_common.sh@1111 -- # nvme_mount 00:05:46.779 16:17:20 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:46.779 16:17:20 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:46.779 16:17:20 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:46.779 16:17:20 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:46.779 16:17:20 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:46.779 16:17:20 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:46.779 16:17:20 -- setup/common.sh@40 -- # local part_no=1 00:05:46.779 16:17:20 -- setup/common.sh@41 -- # local size=1073741824 00:05:46.779 16:17:20 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:46.779 16:17:20 -- setup/common.sh@44 -- # parts=() 00:05:46.779 16:17:20 -- setup/common.sh@44 -- # local parts 00:05:46.779 16:17:20 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:46.779 16:17:20 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:46.779 16:17:20 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:46.779 16:17:20 -- setup/common.sh@46 -- # (( part++ )) 00:05:46.779 16:17:20 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:46.779 16:17:20 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:46.779 16:17:20 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:46.779 16:17:20 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:47.711 Creating new GPT entries in memory. 00:05:47.711 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:47.711 other utilities. 00:05:47.711 16:17:21 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:47.711 16:17:21 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:47.711 16:17:21 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:47.711 16:17:21 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:47.711 16:17:21 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:48.644 Creating new GPT entries in memory. 00:05:48.644 The operation has completed successfully. 
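The partition step just logged zaps the label, then creates the partition with sgdisk while flock holds the device, so parallel jobs cannot race each other on the same disk; the new partition node is then awaited via udev events (sync_dev_uevents.sh). A condensed sketch of that sequence, with the sector range copied from the trace and partprobe standing in as a generic substitute for the udev wait:

    disk=/dev/nvme0n1
    sgdisk "$disk" --zap-all                   # destroy GPT and MBR structures
    # 1073741824 / 4096 = 262144 sectors, i.e. the 2048..264191 range above.
    flock "$disk" sgdisk "$disk" --new=1:2048:264191
    partprobe "$disk"                          # ask the kernel to re-read the table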
00:05:48.644 16:17:22 -- setup/common.sh@57 -- # (( part++ )) 00:05:48.644 16:17:22 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:48.644 16:17:22 -- setup/common.sh@62 -- # wait 58508 00:05:48.644 16:17:22 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:48.644 16:17:22 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:48.644 16:17:22 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:48.644 16:17:22 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:48.644 16:17:22 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:48.902 16:17:22 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:48.902 16:17:22 -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:48.902 16:17:22 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:48.902 16:17:22 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:48.902 16:17:22 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:48.902 16:17:22 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:48.902 16:17:22 -- setup/devices.sh@53 -- # local found=0 00:05:48.902 16:17:22 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:48.902 16:17:22 -- setup/devices.sh@56 -- # : 00:05:48.902 16:17:22 -- setup/devices.sh@59 -- # local pci status 00:05:48.902 16:17:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.902 16:17:22 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:48.902 16:17:22 -- setup/devices.sh@47 -- # setup output config 00:05:48.902 16:17:22 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:48.902 16:17:22 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:48.902 16:17:22 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:48.902 16:17:22 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:48.902 16:17:22 -- setup/devices.sh@63 -- # found=1 00:05:48.902 16:17:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.902 16:17:22 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:48.902 16:17:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.160 16:17:23 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:49.160 16:17:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.160 16:17:23 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:49.160 16:17:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.419 16:17:23 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:49.419 16:17:23 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:49.419 16:17:23 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:49.419 16:17:23 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:49.419 16:17:23 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:49.419 16:17:23 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:49.419 16:17:23 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:49.419 16:17:23 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:49.419 16:17:23 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:49.419 16:17:23 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:49.419 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:49.419 16:17:23 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:49.419 16:17:23 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:49.678 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:49.678 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:49.678 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:49.678 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:49.678 16:17:23 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:49.678 16:17:23 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:49.678 16:17:23 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:49.678 16:17:23 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:49.678 16:17:23 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:49.678 16:17:23 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:49.678 16:17:23 -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:49.678 16:17:23 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:49.678 16:17:23 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:49.678 16:17:23 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:49.678 16:17:23 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:49.678 16:17:23 -- setup/devices.sh@53 -- # local found=0 00:05:49.678 16:17:23 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:49.678 16:17:23 -- setup/devices.sh@56 -- # : 00:05:49.678 16:17:23 -- setup/devices.sh@59 -- # local pci status 00:05:49.678 16:17:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.678 16:17:23 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:49.678 16:17:23 -- setup/devices.sh@47 -- # setup output config 00:05:49.678 16:17:23 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:49.678 16:17:23 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:49.937 16:17:23 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:49.937 16:17:23 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:49.937 16:17:23 -- setup/devices.sh@63 -- # found=1 00:05:49.937 16:17:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.937 16:17:23 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:49.937 
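The cleanup_nvme calls interleaved through this test are the safety net that lets mkfs run repeatedly against the same device: unmount only if actually mounted, then scrub every known signature. A sketch of that teardown, with the mount point path taken from the trace:

    mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
    mountpoint -q "$mnt" && umount "$mnt"   # unmount only when mounted
    wipefs --all /dev/nvme0n1p1             # clears the ext4 magic (the '53 ef' above)
    wipefs --all /dev/nvme0n1               # then the GPT headers and protective MBR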
16:17:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.937 16:17:23 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:49.937 16:17:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.195 16:17:23 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:50.195 16:17:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.195 16:17:24 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:50.195 16:17:24 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:50.195 16:17:24 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:50.195 16:17:24 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:50.195 16:17:24 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:50.195 16:17:24 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:50.195 16:17:24 -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:05:50.195 16:17:24 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:50.195 16:17:24 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:50.195 16:17:24 -- setup/devices.sh@50 -- # local mount_point= 00:05:50.195 16:17:24 -- setup/devices.sh@51 -- # local test_file= 00:05:50.195 16:17:24 -- setup/devices.sh@53 -- # local found=0 00:05:50.195 16:17:24 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:50.195 16:17:24 -- setup/devices.sh@59 -- # local pci status 00:05:50.195 16:17:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.195 16:17:24 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:50.195 16:17:24 -- setup/devices.sh@47 -- # setup output config 00:05:50.195 16:17:24 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:50.195 16:17:24 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:50.453 16:17:24 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:50.453 16:17:24 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:50.453 16:17:24 -- setup/devices.sh@63 -- # found=1 00:05:50.453 16:17:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.453 16:17:24 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:50.453 16:17:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.712 16:17:24 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:50.712 16:17:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.712 16:17:24 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:50.712 16:17:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.712 16:17:24 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:50.712 16:17:24 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:50.712 16:17:24 -- setup/devices.sh@68 -- # return 0 00:05:50.712 16:17:24 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:50.712 16:17:24 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:50.712 16:17:24 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:50.712 16:17:24 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:50.712 16:17:24 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:50.712 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:05:50.712 00:05:50.712 real 0m4.071s 00:05:50.712 user 0m0.736s 00:05:50.712 sys 0m1.061s 00:05:50.712 16:17:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:50.712 ************************************ 00:05:50.712 END TEST nvme_mount 00:05:50.712 16:17:24 -- common/autotest_common.sh@10 -- # set +x 00:05:50.712 ************************************ 00:05:50.712 16:17:24 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:50.712 16:17:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:50.712 16:17:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:50.712 16:17:24 -- common/autotest_common.sh@10 -- # set +x 00:05:50.970 ************************************ 00:05:50.971 START TEST dm_mount 00:05:50.971 ************************************ 00:05:50.971 16:17:24 -- common/autotest_common.sh@1111 -- # dm_mount 00:05:50.971 16:17:24 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:50.971 16:17:24 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:50.971 16:17:24 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:50.971 16:17:24 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:50.971 16:17:24 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:50.971 16:17:24 -- setup/common.sh@40 -- # local part_no=2 00:05:50.971 16:17:24 -- setup/common.sh@41 -- # local size=1073741824 00:05:50.971 16:17:24 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:50.971 16:17:24 -- setup/common.sh@44 -- # parts=() 00:05:50.971 16:17:24 -- setup/common.sh@44 -- # local parts 00:05:50.971 16:17:24 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:50.971 16:17:24 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:50.971 16:17:24 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:50.971 16:17:24 -- setup/common.sh@46 -- # (( part++ )) 00:05:50.971 16:17:24 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:50.971 16:17:24 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:50.971 16:17:24 -- setup/common.sh@46 -- # (( part++ )) 00:05:50.971 16:17:24 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:50.971 16:17:24 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:50.971 16:17:24 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:50.971 16:17:24 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:51.907 Creating new GPT entries in memory. 00:05:51.907 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:51.907 other utilities. 00:05:51.907 16:17:25 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:51.907 16:17:25 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:51.907 16:17:25 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:51.907 16:17:25 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:51.907 16:17:25 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:52.843 Creating new GPT entries in memory. 00:05:52.843 The operation has completed successfully. 00:05:52.843 16:17:26 -- setup/common.sh@57 -- # (( part++ )) 00:05:52.843 16:17:26 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:52.843 16:17:26 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:52.843 16:17:26 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:52.843 16:17:26 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:54.220 The operation has completed successfully. 00:05:54.220 16:17:27 -- setup/common.sh@57 -- # (( part++ )) 00:05:54.220 16:17:27 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:54.220 16:17:27 -- setup/common.sh@62 -- # wait 58972 00:05:54.220 16:17:27 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:54.220 16:17:27 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:54.220 16:17:27 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:54.220 16:17:27 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:54.220 16:17:27 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:54.220 16:17:27 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:54.220 16:17:27 -- setup/devices.sh@161 -- # break 00:05:54.220 16:17:27 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:54.220 16:17:27 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:54.220 16:17:27 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:54.220 16:17:27 -- setup/devices.sh@166 -- # dm=dm-0 00:05:54.220 16:17:27 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:54.220 16:17:27 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:54.220 16:17:27 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:54.220 16:17:27 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:54.220 16:17:27 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:54.220 16:17:27 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:54.220 16:17:27 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:54.220 16:17:27 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:54.220 16:17:27 -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:54.220 16:17:27 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:54.220 16:17:27 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:54.220 16:17:27 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:54.220 16:17:27 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:54.220 16:17:27 -- setup/devices.sh@53 -- # local found=0 00:05:54.220 16:17:27 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:54.220 16:17:27 -- setup/devices.sh@56 -- # : 00:05:54.220 16:17:27 -- setup/devices.sh@59 -- # local pci status 00:05:54.220 16:17:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.220 16:17:27 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:54.220 16:17:27 -- setup/devices.sh@47 -- # setup output config 00:05:54.220 16:17:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:54.220 16:17:27 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:54.220 16:17:28 -- 
setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:54.220 16:17:28 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:54.220 16:17:28 -- setup/devices.sh@63 -- # found=1 00:05:54.220 16:17:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.220 16:17:28 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:54.220 16:17:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.480 16:17:28 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:54.480 16:17:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.480 16:17:28 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:54.480 16:17:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.480 16:17:28 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:54.480 16:17:28 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:54.480 16:17:28 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:54.480 16:17:28 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:54.480 16:17:28 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:54.480 16:17:28 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:54.480 16:17:28 -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:54.480 16:17:28 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:54.480 16:17:28 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:54.480 16:17:28 -- setup/devices.sh@50 -- # local mount_point= 00:05:54.480 16:17:28 -- setup/devices.sh@51 -- # local test_file= 00:05:54.480 16:17:28 -- setup/devices.sh@53 -- # local found=0 00:05:54.480 16:17:28 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:54.480 16:17:28 -- setup/devices.sh@59 -- # local pci status 00:05:54.480 16:17:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.480 16:17:28 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:54.480 16:17:28 -- setup/devices.sh@47 -- # setup output config 00:05:54.480 16:17:28 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:54.480 16:17:28 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:54.738 16:17:28 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:54.738 16:17:28 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:54.738 16:17:28 -- setup/devices.sh@63 -- # found=1 00:05:54.738 16:17:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.738 16:17:28 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:54.738 16:17:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.996 16:17:28 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:54.996 16:17:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.996 16:17:28 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:54.996 16:17:28 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.996 16:17:28 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:54.996 16:17:28 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:54.996 16:17:28 -- setup/devices.sh@68 -- # return 0 00:05:54.996 16:17:28 -- setup/devices.sh@187 -- # cleanup_dm 00:05:54.996 16:17:28 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:54.996 16:17:28 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:54.996 16:17:28 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:54.996 16:17:29 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:54.996 16:17:29 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:54.996 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:54.996 16:17:29 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:54.996 16:17:29 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:54.996 00:05:54.996 real 0m4.250s 00:05:54.996 user 0m0.478s 00:05:54.996 sys 0m0.733s 00:05:54.996 16:17:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:54.996 16:17:29 -- common/autotest_common.sh@10 -- # set +x 00:05:54.996 ************************************ 00:05:54.996 END TEST dm_mount 00:05:54.996 ************************************ 00:05:55.254 16:17:29 -- setup/devices.sh@1 -- # cleanup 00:05:55.254 16:17:29 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:55.254 16:17:29 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:55.254 16:17:29 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:55.254 16:17:29 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:55.254 16:17:29 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:55.254 16:17:29 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:55.513 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:55.513 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:55.513 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:55.513 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:55.513 16:17:29 -- setup/devices.sh@12 -- # cleanup_dm 00:05:55.513 16:17:29 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:55.513 16:17:29 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:55.513 16:17:29 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:55.513 16:17:29 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:55.513 16:17:29 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:55.513 16:17:29 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:55.513 00:05:55.513 real 0m9.959s 00:05:55.513 user 0m1.874s 00:05:55.513 sys 0m2.471s 00:05:55.513 16:17:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:55.513 16:17:29 -- common/autotest_common.sh@10 -- # set +x 00:05:55.513 ************************************ 00:05:55.513 END TEST devices 00:05:55.513 ************************************ 00:05:55.513 00:05:55.513 real 0m22.872s 00:05:55.513 user 0m7.395s 00:05:55.513 sys 0m9.768s 00:05:55.513 16:17:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:55.513 16:17:29 -- common/autotest_common.sh@10 -- # set +x 00:05:55.513 ************************************ 00:05:55.513 END TEST setup.sh 00:05:55.513 ************************************ 00:05:55.513 16:17:29 -- 
spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:56.082 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:56.082 Hugepages 00:05:56.082 node hugesize free / total 00:05:56.082 node0 1048576kB 0 / 0 00:05:56.082 node0 2048kB 2048 / 2048 00:05:56.082 00:05:56.082 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:56.340 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:56.341 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:56.341 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:05:56.341 16:17:30 -- spdk/autotest.sh@130 -- # uname -s 00:05:56.341 16:17:30 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:56.341 16:17:30 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:56.341 16:17:30 -- common/autotest_common.sh@1517 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:57.277 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:57.277 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:57.277 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:57.277 16:17:31 -- common/autotest_common.sh@1518 -- # sleep 1 00:05:58.214 16:17:32 -- common/autotest_common.sh@1519 -- # bdfs=() 00:05:58.214 16:17:32 -- common/autotest_common.sh@1519 -- # local bdfs 00:05:58.214 16:17:32 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:58.214 16:17:32 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:58.214 16:17:32 -- common/autotest_common.sh@1499 -- # bdfs=() 00:05:58.214 16:17:32 -- common/autotest_common.sh@1499 -- # local bdfs 00:05:58.214 16:17:32 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:58.214 16:17:32 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:05:58.214 16:17:32 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:58.473 16:17:32 -- common/autotest_common.sh@1501 -- # (( 2 == 0 )) 00:05:58.473 16:17:32 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:58.473 16:17:32 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:58.731 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:58.731 Waiting for block devices as requested 00:05:58.731 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:58.991 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:58.991 16:17:32 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:58.991 16:17:32 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:58.991 16:17:32 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:58.991 16:17:32 -- common/autotest_common.sh@1488 -- # grep 0000:00:10.0/nvme/nvme 00:05:58.991 16:17:32 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:58.991 16:17:32 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:58.991 16:17:32 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:58.991 16:17:32 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme1 00:05:58.991 16:17:32 -- common/autotest_common.sh@1525 -- # 
nvme_ctrlr=/dev/nvme1 00:05:58.991 16:17:32 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:58.991 16:17:32 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:58.991 16:17:32 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:58.991 16:17:32 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:58.991 16:17:32 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:58.991 16:17:32 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:58.991 16:17:32 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:58.991 16:17:32 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:58.991 16:17:32 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:58.991 16:17:32 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:58.991 16:17:32 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:58.991 16:17:32 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:58.991 16:17:32 -- common/autotest_common.sh@1543 -- # continue 00:05:58.991 16:17:32 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:58.991 16:17:32 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:58.991 16:17:32 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:58.991 16:17:32 -- common/autotest_common.sh@1488 -- # grep 0000:00:11.0/nvme/nvme 00:05:58.991 16:17:32 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:58.991 16:17:32 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:58.991 16:17:32 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:58.991 16:17:32 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:05:58.991 16:17:32 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:58.991 16:17:32 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:58.991 16:17:32 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:58.991 16:17:32 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:58.991 16:17:32 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:58.991 16:17:32 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:58.991 16:17:32 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:58.991 16:17:32 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:58.991 16:17:32 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:58.991 16:17:32 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:58.991 16:17:32 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:58.991 16:17:32 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:58.991 16:17:32 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:58.991 16:17:32 -- common/autotest_common.sh@1543 -- # continue 00:05:58.991 16:17:32 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:58.991 16:17:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:58.991 16:17:32 -- common/autotest_common.sh@10 -- # set +x 00:05:58.991 16:17:32 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:58.991 16:17:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:58.991 16:17:32 -- common/autotest_common.sh@10 -- # set +x 00:05:58.991 16:17:32 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:59.935 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not 
binding PCI dev 00:05:59.935 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:59.935 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:59.935 16:17:33 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:59.935 16:17:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:59.935 16:17:33 -- common/autotest_common.sh@10 -- # set +x 00:05:59.935 16:17:33 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:59.935 16:17:33 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 00:05:59.935 16:17:33 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:05:59.935 16:17:33 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:59.935 16:17:33 -- common/autotest_common.sh@1563 -- # local bdfs 00:05:59.935 16:17:33 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs 00:05:59.935 16:17:33 -- common/autotest_common.sh@1499 -- # bdfs=() 00:05:59.935 16:17:33 -- common/autotest_common.sh@1499 -- # local bdfs 00:05:59.935 16:17:33 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:59.935 16:17:33 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:59.935 16:17:33 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:05:59.935 16:17:33 -- common/autotest_common.sh@1501 -- # (( 2 == 0 )) 00:05:59.935 16:17:33 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:59.935 16:17:33 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:05:59.935 16:17:33 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:59.935 16:17:33 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:59.935 16:17:33 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:59.935 16:17:33 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:05:59.935 16:17:33 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:59.935 16:17:33 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:59.935 16:17:33 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:59.935 16:17:33 -- common/autotest_common.sh@1572 -- # printf '%s\n' 00:05:59.935 16:17:33 -- common/autotest_common.sh@1578 -- # [[ -z '' ]] 00:05:59.935 16:17:33 -- common/autotest_common.sh@1579 -- # return 0 00:05:59.935 16:17:33 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:59.935 16:17:33 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:59.935 16:17:33 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:59.935 16:17:33 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:59.935 16:17:33 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:59.935 16:17:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:59.935 16:17:33 -- common/autotest_common.sh@10 -- # set +x 00:05:59.935 16:17:33 -- spdk/autotest.sh@164 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:59.935 16:17:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:59.935 16:17:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.935 16:17:33 -- common/autotest_common.sh@10 -- # set +x 00:06:00.194 ************************************ 00:06:00.194 START TEST env 00:06:00.194 ************************************ 00:06:00.194 16:17:34 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:00.194 * Looking for test storage... 
00:06:00.194 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:00.194 16:17:34 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:00.194 16:17:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:00.194 16:17:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:00.194 16:17:34 -- common/autotest_common.sh@10 -- # set +x 00:06:00.194 ************************************ 00:06:00.194 START TEST env_memory 00:06:00.194 ************************************ 00:06:00.194 16:17:34 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:00.194 00:06:00.194 00:06:00.194 CUnit - A unit testing framework for C - Version 2.1-3 00:06:00.194 http://cunit.sourceforge.net/ 00:06:00.194 00:06:00.194 00:06:00.194 Suite: memory 00:06:00.452 Test: alloc and free memory map ...[2024-04-17 16:17:34.243976] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:00.452 passed 00:06:00.452 Test: mem map translation ...[2024-04-17 16:17:34.281470] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:00.452 [2024-04-17 16:17:34.281576] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:00.452 [2024-04-17 16:17:34.281659] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:00.452 [2024-04-17 16:17:34.281682] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:00.452 passed 00:06:00.452 Test: mem map registration ...[2024-04-17 16:17:34.353457] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:06:00.452 [2024-04-17 16:17:34.353518] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:00.452 passed 00:06:00.452 Test: mem map adjacent registrations ...passed 00:06:00.452 00:06:00.452 Run Summary: Type Total Ran Passed Failed Inactive 00:06:00.452 suites 1 1 n/a 0 0 00:06:00.452 tests 4 4 4 0 0 00:06:00.452 asserts 152 152 152 0 n/a 00:06:00.452 00:06:00.452 Elapsed time = 0.233 seconds 00:06:00.452 00:06:00.452 real 0m0.250s 00:06:00.452 user 0m0.232s 00:06:00.452 sys 0m0.016s 00:06:00.452 16:17:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:00.452 16:17:34 -- common/autotest_common.sh@10 -- # set +x 00:06:00.452 ************************************ 00:06:00.452 END TEST env_memory 00:06:00.452 ************************************ 00:06:00.452 16:17:34 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:00.452 16:17:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:00.452 16:17:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:00.452 16:17:34 -- common/autotest_common.sh@10 -- # set +x 00:06:00.711 ************************************ 00:06:00.711 START TEST env_vtophys 00:06:00.711 ************************************ 00:06:00.711 16:17:34 -- common/autotest_common.sh@1111 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:00.711 EAL: lib.eal log level changed from notice to debug 00:06:00.711 EAL: Detected lcore 0 as core 0 on socket 0 00:06:00.711 EAL: Detected lcore 1 as core 0 on socket 0 00:06:00.711 EAL: Detected lcore 2 as core 0 on socket 0 00:06:00.711 EAL: Detected lcore 3 as core 0 on socket 0 00:06:00.711 EAL: Detected lcore 4 as core 0 on socket 0 00:06:00.711 EAL: Detected lcore 5 as core 0 on socket 0 00:06:00.711 EAL: Detected lcore 6 as core 0 on socket 0 00:06:00.711 EAL: Detected lcore 7 as core 0 on socket 0 00:06:00.711 EAL: Detected lcore 8 as core 0 on socket 0 00:06:00.711 EAL: Detected lcore 9 as core 0 on socket 0 00:06:00.711 EAL: Maximum logical cores by configuration: 128 00:06:00.711 EAL: Detected CPU lcores: 10 00:06:00.711 EAL: Detected NUMA nodes: 1 00:06:00.711 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:06:00.711 EAL: Detected shared linkage of DPDK 00:06:00.711 EAL: No shared files mode enabled, IPC will be disabled 00:06:00.711 EAL: Selected IOVA mode 'PA' 00:06:00.711 EAL: Probing VFIO support... 00:06:00.711 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:00.711 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:00.711 EAL: Ask a virtual area of 0x2e000 bytes 00:06:00.711 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:00.711 EAL: Setting up physically contiguous memory... 00:06:00.711 EAL: Setting maximum number of open files to 524288 00:06:00.711 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:00.711 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:00.711 EAL: Ask a virtual area of 0x61000 bytes 00:06:00.711 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:00.711 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:00.711 EAL: Ask a virtual area of 0x400000000 bytes 00:06:00.711 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:00.711 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:00.711 EAL: Ask a virtual area of 0x61000 bytes 00:06:00.711 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:00.711 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:00.711 EAL: Ask a virtual area of 0x400000000 bytes 00:06:00.711 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:00.711 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:00.711 EAL: Ask a virtual area of 0x61000 bytes 00:06:00.711 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:00.711 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:00.711 EAL: Ask a virtual area of 0x400000000 bytes 00:06:00.711 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:00.711 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:00.711 EAL: Ask a virtual area of 0x61000 bytes 00:06:00.711 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:00.711 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:00.711 EAL: Ask a virtual area of 0x400000000 bytes 00:06:00.711 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:00.711 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:00.711 EAL: Hugepages will be freed exactly as allocated. 
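The EAL lines above show the memory layout vtophys runs against: one NUMA node, 2 MB hugepages, and four 0x400000000-byte memseg list reservations made up front. As a minimal sketch (assuming a root shell on the same VM; the sysfs/procfs paths are standard Linux hugetlbfs locations, not part of the test itself), the pool those messages describe can be inspected directly:

    grep Huge /proc/meminfo                                   # HugePages_Total / HugePages_Free for the default size
    cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages # the 2 MB pool EAL reports as "2048 / 2048" above
    echo 2048 > /proc/sys/vm/nr_hugepages                      # resize the pool, roughly what setup.sh does before tests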
00:06:00.711 EAL: No shared files mode enabled, IPC is disabled 00:06:00.711 EAL: No shared files mode enabled, IPC is disabled 00:06:00.711 EAL: TSC frequency is ~2200000 KHz 00:06:00.711 EAL: Main lcore 0 is ready (tid=7fc95c003a00;cpuset=[0]) 00:06:00.711 EAL: Trying to obtain current memory policy. 00:06:00.711 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:00.711 EAL: Restoring previous memory policy: 0 00:06:00.711 EAL: request: mp_malloc_sync 00:06:00.711 EAL: No shared files mode enabled, IPC is disabled 00:06:00.711 EAL: Heap on socket 0 was expanded by 2MB 00:06:00.711 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:00.711 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:00.711 EAL: Mem event callback 'spdk:(nil)' registered 00:06:00.711 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:06:00.711 00:06:00.711 00:06:00.711 CUnit - A unit testing framework for C - Version 2.1-3 00:06:00.711 http://cunit.sourceforge.net/ 00:06:00.711 00:06:00.711 00:06:00.711 Suite: components_suite 00:06:00.711 Test: vtophys_malloc_test ...passed 00:06:00.711 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:00.711 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:00.711 EAL: Restoring previous memory policy: 4 00:06:00.711 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.711 EAL: request: mp_malloc_sync 00:06:00.711 EAL: No shared files mode enabled, IPC is disabled 00:06:00.711 EAL: Heap on socket 0 was expanded by 4MB 00:06:00.711 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.711 EAL: request: mp_malloc_sync 00:06:00.711 EAL: No shared files mode enabled, IPC is disabled 00:06:00.711 EAL: Heap on socket 0 was shrunk by 4MB 00:06:00.711 EAL: Trying to obtain current memory policy. 00:06:00.711 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:00.711 EAL: Restoring previous memory policy: 4 00:06:00.711 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.711 EAL: request: mp_malloc_sync 00:06:00.711 EAL: No shared files mode enabled, IPC is disabled 00:06:00.711 EAL: Heap on socket 0 was expanded by 6MB 00:06:00.711 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.711 EAL: request: mp_malloc_sync 00:06:00.711 EAL: No shared files mode enabled, IPC is disabled 00:06:00.711 EAL: Heap on socket 0 was shrunk by 6MB 00:06:00.711 EAL: Trying to obtain current memory policy. 00:06:00.711 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:00.711 EAL: Restoring previous memory policy: 4 00:06:00.711 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.711 EAL: request: mp_malloc_sync 00:06:00.711 EAL: No shared files mode enabled, IPC is disabled 00:06:00.711 EAL: Heap on socket 0 was expanded by 10MB 00:06:00.711 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.711 EAL: request: mp_malloc_sync 00:06:00.711 EAL: No shared files mode enabled, IPC is disabled 00:06:00.711 EAL: Heap on socket 0 was shrunk by 10MB 00:06:00.711 EAL: Trying to obtain current memory policy. 
00:06:00.711 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:00.711 EAL: Restoring previous memory policy: 4 00:06:00.711 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.711 EAL: request: mp_malloc_sync 00:06:00.711 EAL: No shared files mode enabled, IPC is disabled 00:06:00.711 EAL: Heap on socket 0 was expanded by 18MB 00:06:00.711 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.711 EAL: request: mp_malloc_sync 00:06:00.711 EAL: No shared files mode enabled, IPC is disabled 00:06:00.711 EAL: Heap on socket 0 was shrunk by 18MB 00:06:00.711 EAL: Trying to obtain current memory policy. 00:06:00.711 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:00.711 EAL: Restoring previous memory policy: 4 00:06:00.711 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.711 EAL: request: mp_malloc_sync 00:06:00.711 EAL: No shared files mode enabled, IPC is disabled 00:06:00.711 EAL: Heap on socket 0 was expanded by 34MB 00:06:00.711 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.970 EAL: request: mp_malloc_sync 00:06:00.970 EAL: No shared files mode enabled, IPC is disabled 00:06:00.970 EAL: Heap on socket 0 was shrunk by 34MB 00:06:00.970 EAL: Trying to obtain current memory policy. 00:06:00.970 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:00.970 EAL: Restoring previous memory policy: 4 00:06:00.970 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.970 EAL: request: mp_malloc_sync 00:06:00.970 EAL: No shared files mode enabled, IPC is disabled 00:06:00.970 EAL: Heap on socket 0 was expanded by 66MB 00:06:00.970 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.970 EAL: request: mp_malloc_sync 00:06:00.970 EAL: No shared files mode enabled, IPC is disabled 00:06:00.970 EAL: Heap on socket 0 was shrunk by 66MB 00:06:00.970 EAL: Trying to obtain current memory policy. 00:06:00.970 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:00.970 EAL: Restoring previous memory policy: 4 00:06:00.970 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.970 EAL: request: mp_malloc_sync 00:06:00.970 EAL: No shared files mode enabled, IPC is disabled 00:06:00.970 EAL: Heap on socket 0 was expanded by 130MB 00:06:00.970 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.970 EAL: request: mp_malloc_sync 00:06:00.970 EAL: No shared files mode enabled, IPC is disabled 00:06:00.970 EAL: Heap on socket 0 was shrunk by 130MB 00:06:00.970 EAL: Trying to obtain current memory policy. 00:06:00.970 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:00.970 EAL: Restoring previous memory policy: 4 00:06:00.970 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.970 EAL: request: mp_malloc_sync 00:06:00.970 EAL: No shared files mode enabled, IPC is disabled 00:06:00.970 EAL: Heap on socket 0 was expanded by 258MB 00:06:00.970 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.229 EAL: request: mp_malloc_sync 00:06:01.229 EAL: No shared files mode enabled, IPC is disabled 00:06:01.229 EAL: Heap on socket 0 was shrunk by 258MB 00:06:01.229 EAL: Trying to obtain current memory policy. 
00:06:01.229 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:01.229 EAL: Restoring previous memory policy: 4 00:06:01.229 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.229 EAL: request: mp_malloc_sync 00:06:01.229 EAL: No shared files mode enabled, IPC is disabled 00:06:01.229 EAL: Heap on socket 0 was expanded by 514MB 00:06:01.487 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.487 EAL: request: mp_malloc_sync 00:06:01.487 EAL: No shared files mode enabled, IPC is disabled 00:06:01.487 EAL: Heap on socket 0 was shrunk by 514MB 00:06:01.487 EAL: Trying to obtain current memory policy. 00:06:01.487 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:01.746 EAL: Restoring previous memory policy: 4 00:06:01.746 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.746 EAL: request: mp_malloc_sync 00:06:01.746 EAL: No shared files mode enabled, IPC is disabled 00:06:01.746 EAL: Heap on socket 0 was expanded by 1026MB 00:06:02.006 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.006 passed 00:06:02.006 00:06:02.006 Run Summary: Type Total Ran Passed Failed Inactive 00:06:02.006 suites 1 1 n/a 0 0 00:06:02.006 tests 2 2 2 0 0 00:06:02.006 asserts 5330 5330 5330 0 n/a 00:06:02.006 00:06:02.006 Elapsed time = 1.281 seconds 00:06:02.006 EAL: request: mp_malloc_sync 00:06:02.006 EAL: No shared files mode enabled, IPC is disabled 00:06:02.006 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:02.006 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.006 EAL: request: mp_malloc_sync 00:06:02.006 EAL: No shared files mode enabled, IPC is disabled 00:06:02.006 EAL: Heap on socket 0 was shrunk by 2MB 00:06:02.006 EAL: No shared files mode enabled, IPC is disabled 00:06:02.006 EAL: No shared files mode enabled, IPC is disabled 00:06:02.006 EAL: No shared files mode enabled, IPC is disabled 00:06:02.006 00:06:02.006 real 0m1.480s 00:06:02.006 user 0m0.801s 00:06:02.006 sys 0m0.545s 00:06:02.006 16:17:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:02.006 16:17:36 -- common/autotest_common.sh@10 -- # set +x 00:06:02.006 ************************************ 00:06:02.006 END TEST env_vtophys 00:06:02.006 ************************************ 00:06:02.265 16:17:36 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:02.265 16:17:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:02.265 16:17:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:02.265 16:17:36 -- common/autotest_common.sh@10 -- # set +x 00:06:02.265 ************************************ 00:06:02.265 START TEST env_pci 00:06:02.265 ************************************ 00:06:02.265 16:17:36 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:02.265 00:06:02.265 00:06:02.265 CUnit - A unit testing framework for C - Version 2.1-3 00:06:02.265 http://cunit.sourceforge.net/ 00:06:02.265 00:06:02.265 00:06:02.265 Suite: pci 00:06:02.265 Test: pci_hook ...[2024-04-17 16:17:36.174237] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 60235 has claimed it 00:06:02.265 passed 00:06:02.265 00:06:02.265 Run Summary: Type Total Ran Passed Failed Inactive 00:06:02.265 suites 1 1 n/a 0 0 00:06:02.266 tests 1 1 1 0 0 00:06:02.266 asserts 25 25 25 0 n/a 00:06:02.266 00:06:02.266 Elapsed time = 0.002 seconds 00:06:02.266 EAL: Cannot find device (10000:00:01.0) 00:06:02.266 EAL: Failed to attach device 
on primary process 00:06:02.266 00:06:02.266 real 0m0.020s 00:06:02.266 user 0m0.009s 00:06:02.266 sys 0m0.011s 00:06:02.266 16:17:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:02.266 16:17:36 -- common/autotest_common.sh@10 -- # set +x 00:06:02.266 ************************************ 00:06:02.266 END TEST env_pci 00:06:02.266 ************************************ 00:06:02.266 16:17:36 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:02.266 16:17:36 -- env/env.sh@15 -- # uname 00:06:02.266 16:17:36 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:02.266 16:17:36 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:02.266 16:17:36 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:02.266 16:17:36 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:06:02.266 16:17:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:02.266 16:17:36 -- common/autotest_common.sh@10 -- # set +x 00:06:02.266 ************************************ 00:06:02.266 START TEST env_dpdk_post_init 00:06:02.266 ************************************ 00:06:02.266 16:17:36 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:02.525 EAL: Detected CPU lcores: 10 00:06:02.525 EAL: Detected NUMA nodes: 1 00:06:02.525 EAL: Detected shared linkage of DPDK 00:06:02.525 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:02.525 EAL: Selected IOVA mode 'PA' 00:06:02.525 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:02.525 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:06:02.525 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:06:02.525 Starting DPDK initialization... 00:06:02.525 Starting SPDK post initialization... 00:06:02.525 SPDK NVMe probe 00:06:02.525 Attaching to 0000:00:10.0 00:06:02.525 Attaching to 0000:00:11.0 00:06:02.525 Attached to 0000:00:10.0 00:06:02.525 Attached to 0000:00:11.0 00:06:02.525 Cleaning up... 
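The probe above only attaches to 0000:00:10.0 and 0000:00:11.0 because setup.sh earlier rebound both controllers from the kernel nvme driver to uio_pci_generic. A rough way to confirm which driver owns a BDF at any point (sysfs paths are standard; the BDF is taken from this log, and the uio_pci_generic result assumes SPDK still holds the device):

    readlink -f /sys/bus/pci/devices/0000:00:10.0/driver   # .../drivers/uio_pci_generic while SPDK owns it
    cat /sys/bus/pci/devices/0000:00:10.0/vendor           # 0x1b36, matching the "(1b36 0010)" lines above
    lspci -k -s 00:10.0                                    # kernel view: driver in use plus candidate modules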
00:06:02.525 00:06:02.525 real 0m0.182s 00:06:02.525 user 0m0.047s 00:06:02.525 sys 0m0.035s 00:06:02.525 16:17:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:02.525 16:17:36 -- common/autotest_common.sh@10 -- # set +x 00:06:02.525 ************************************ 00:06:02.525 END TEST env_dpdk_post_init 00:06:02.525 ************************************ 00:06:02.525 16:17:36 -- env/env.sh@26 -- # uname 00:06:02.525 16:17:36 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:02.525 16:17:36 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:02.525 16:17:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:02.525 16:17:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:02.525 16:17:36 -- common/autotest_common.sh@10 -- # set +x 00:06:02.785 ************************************ 00:06:02.785 START TEST env_mem_callbacks 00:06:02.785 ************************************ 00:06:02.785 16:17:36 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:02.785 EAL: Detected CPU lcores: 10 00:06:02.785 EAL: Detected NUMA nodes: 1 00:06:02.785 EAL: Detected shared linkage of DPDK 00:06:02.785 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:02.785 EAL: Selected IOVA mode 'PA' 00:06:02.785 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:02.785 00:06:02.785 00:06:02.785 CUnit - A unit testing framework for C - Version 2.1-3 00:06:02.785 http://cunit.sourceforge.net/ 00:06:02.785 00:06:02.785 00:06:02.785 Suite: memory 00:06:02.785 Test: test ... 00:06:02.785 register 0x200000200000 2097152 00:06:02.785 malloc 3145728 00:06:02.785 register 0x200000400000 4194304 00:06:02.785 buf 0x200000500000 len 3145728 PASSED 00:06:02.785 malloc 64 00:06:02.785 buf 0x2000004fff40 len 64 PASSED 00:06:02.785 malloc 4194304 00:06:02.785 register 0x200000800000 6291456 00:06:02.785 buf 0x200000a00000 len 4194304 PASSED 00:06:02.785 free 0x200000500000 3145728 00:06:02.785 free 0x2000004fff40 64 00:06:02.785 unregister 0x200000400000 4194304 PASSED 00:06:02.785 free 0x200000a00000 4194304 00:06:02.785 unregister 0x200000800000 6291456 PASSED 00:06:02.785 malloc 8388608 00:06:02.785 register 0x200000400000 10485760 00:06:02.785 buf 0x200000600000 len 8388608 PASSED 00:06:02.785 free 0x200000600000 8388608 00:06:02.785 unregister 0x200000400000 10485760 PASSED 00:06:02.785 passed 00:06:02.785 00:06:02.785 Run Summary: Type Total Ran Passed Failed Inactive 00:06:02.785 suites 1 1 n/a 0 0 00:06:02.785 tests 1 1 1 0 0 00:06:02.785 asserts 15 15 15 0 n/a 00:06:02.785 00:06:02.785 Elapsed time = 0.007 seconds 00:06:02.785 00:06:02.785 real 0m0.140s 00:06:02.785 user 0m0.011s 00:06:02.785 sys 0m0.029s 00:06:02.785 16:17:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:02.785 16:17:36 -- common/autotest_common.sh@10 -- # set +x 00:06:02.785 ************************************ 00:06:02.785 END TEST env_mem_callbacks 00:06:02.785 ************************************ 00:06:02.785 00:06:02.785 real 0m2.751s 00:06:02.785 user 0m1.331s 00:06:02.785 sys 0m1.010s 00:06:02.785 16:17:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:02.785 16:17:36 -- common/autotest_common.sh@10 -- # set +x 00:06:02.785 ************************************ 00:06:02.785 END TEST env 00:06:02.785 ************************************ 00:06:02.785 16:17:36 -- spdk/autotest.sh@165 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 
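The rpc suite invoked here drives a live target over its JSON-RPC socket; the rpc_cmd calls in the trace below are thin wrappers around the repo's rpc.py client. A minimal sketch of the same round trip by hand (the binary, -e bdev flag, socket path, and method names all appear later in this log; rpc.py living under scripts/ is assumed from the SPDK tree layout):

    ./build/bin/spdk_tgt -e bdev &                                     # target with the bdev tracepoint group enabled
    ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 8 512    # 8 MB / 512 B blocks; prints the name, e.g. Malloc0
    ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs | jq length  # 1 after the create succeeds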
00:06:02.785 16:17:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:02.785 16:17:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:02.785 16:17:36 -- common/autotest_common.sh@10 -- # set +x 00:06:03.044 ************************************ 00:06:03.044 START TEST rpc 00:06:03.044 ************************************ 00:06:03.044 16:17:36 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:03.044 * Looking for test storage... 00:06:03.044 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:03.044 16:17:36 -- rpc/rpc.sh@65 -- # spdk_pid=60364 00:06:03.044 16:17:36 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:03.044 16:17:36 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:03.044 16:17:36 -- rpc/rpc.sh@67 -- # waitforlisten 60364 00:06:03.044 16:17:36 -- common/autotest_common.sh@817 -- # '[' -z 60364 ']' 00:06:03.044 16:17:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.044 16:17:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:03.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.044 16:17:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.044 16:17:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:03.044 16:17:36 -- common/autotest_common.sh@10 -- # set +x 00:06:03.044 [2024-04-17 16:17:37.017740] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:06:03.044 [2024-04-17 16:17:37.017891] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60364 ] 00:06:03.303 [2024-04-17 16:17:37.157715] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.303 [2024-04-17 16:17:37.273088] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:03.303 [2024-04-17 16:17:37.273145] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 60364' to capture a snapshot of events at runtime. 00:06:03.303 [2024-04-17 16:17:37.273157] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:03.303 [2024-04-17 16:17:37.273165] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:03.303 [2024-04-17 16:17:37.273173] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid60364 for offline analysis/debug. 
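The app_setup_trace notices above spell out both capture paths for the tracepoints enabled with -e bdev: attach to the running target, or decode the shared-memory file it leaves behind. Sketched with the pid and shm name from this run (-s and -p are quoted straight from the notice; reading a copied shm file with -f is assumed from the tool's usage, not shown in this log):

    ./build/bin/spdk_trace -s spdk_tgt -p 60364                  # live snapshot of events at runtime
    ./build/bin/spdk_trace -f /dev/shm/spdk_tgt_trace.pid60364   # offline analysis from the copied shm file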
00:06:03.303 [2024-04-17 16:17:37.273206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.301 16:17:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:04.301 16:17:38 -- common/autotest_common.sh@850 -- # return 0 00:06:04.301 16:17:38 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:04.301 16:17:38 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:04.301 16:17:38 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:04.301 16:17:38 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:04.301 16:17:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:04.301 16:17:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.301 16:17:38 -- common/autotest_common.sh@10 -- # set +x 00:06:04.301 ************************************ 00:06:04.301 START TEST rpc_integrity 00:06:04.301 ************************************ 00:06:04.301 16:17:38 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:06:04.301 16:17:38 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:04.301 16:17:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:04.301 16:17:38 -- common/autotest_common.sh@10 -- # set +x 00:06:04.301 16:17:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:04.301 16:17:38 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:04.301 16:17:38 -- rpc/rpc.sh@13 -- # jq length 00:06:04.301 16:17:38 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:04.301 16:17:38 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:04.301 16:17:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:04.301 16:17:38 -- common/autotest_common.sh@10 -- # set +x 00:06:04.301 16:17:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:04.301 16:17:38 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:04.301 16:17:38 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:04.301 16:17:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:04.301 16:17:38 -- common/autotest_common.sh@10 -- # set +x 00:06:04.301 16:17:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:04.301 16:17:38 -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:04.301 { 00:06:04.301 "aliases": [ 00:06:04.301 "3ce674e1-e5af-423c-ae75-d6b80ff0ac31" 00:06:04.301 ], 00:06:04.301 "assigned_rate_limits": { 00:06:04.301 "r_mbytes_per_sec": 0, 00:06:04.301 "rw_ios_per_sec": 0, 00:06:04.301 "rw_mbytes_per_sec": 0, 00:06:04.301 "w_mbytes_per_sec": 0 00:06:04.301 }, 00:06:04.301 "block_size": 512, 00:06:04.301 "claimed": false, 00:06:04.301 "driver_specific": {}, 00:06:04.301 "memory_domains": [ 00:06:04.301 { 00:06:04.301 "dma_device_id": "system", 00:06:04.301 "dma_device_type": 1 00:06:04.301 }, 00:06:04.301 { 00:06:04.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:04.301 "dma_device_type": 2 00:06:04.301 } 00:06:04.301 ], 00:06:04.301 "name": "Malloc0", 00:06:04.301 "num_blocks": 16384, 00:06:04.301 "product_name": "Malloc disk", 00:06:04.301 "supported_io_types": { 00:06:04.301 "abort": true, 00:06:04.301 "compare": false, 00:06:04.301 "compare_and_write": false, 00:06:04.301 "flush": true, 00:06:04.301 "nvme_admin": false, 00:06:04.301 "nvme_io": false, 00:06:04.301 "read": true, 00:06:04.301 "reset": true, 
00:06:04.301 "unmap": true, 00:06:04.301 "write": true, 00:06:04.301 "write_zeroes": true 00:06:04.301 }, 00:06:04.301 "uuid": "3ce674e1-e5af-423c-ae75-d6b80ff0ac31", 00:06:04.301 "zoned": false 00:06:04.301 } 00:06:04.301 ]' 00:06:04.301 16:17:38 -- rpc/rpc.sh@17 -- # jq length 00:06:04.301 16:17:38 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:04.301 16:17:38 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:04.301 16:17:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:04.301 16:17:38 -- common/autotest_common.sh@10 -- # set +x 00:06:04.302 [2024-04-17 16:17:38.279799] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:04.302 [2024-04-17 16:17:38.279855] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:04.302 [2024-04-17 16:17:38.279875] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xecab10 00:06:04.302 [2024-04-17 16:17:38.279885] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:04.302 [2024-04-17 16:17:38.281653] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:04.302 [2024-04-17 16:17:38.281690] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:04.302 Passthru0 00:06:04.302 16:17:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:04.302 16:17:38 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:04.302 16:17:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:04.302 16:17:38 -- common/autotest_common.sh@10 -- # set +x 00:06:04.302 16:17:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:04.302 16:17:38 -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:04.302 { 00:06:04.302 "aliases": [ 00:06:04.302 "3ce674e1-e5af-423c-ae75-d6b80ff0ac31" 00:06:04.302 ], 00:06:04.302 "assigned_rate_limits": { 00:06:04.302 "r_mbytes_per_sec": 0, 00:06:04.302 "rw_ios_per_sec": 0, 00:06:04.302 "rw_mbytes_per_sec": 0, 00:06:04.302 "w_mbytes_per_sec": 0 00:06:04.302 }, 00:06:04.302 "block_size": 512, 00:06:04.302 "claim_type": "exclusive_write", 00:06:04.302 "claimed": true, 00:06:04.302 "driver_specific": {}, 00:06:04.302 "memory_domains": [ 00:06:04.302 { 00:06:04.302 "dma_device_id": "system", 00:06:04.302 "dma_device_type": 1 00:06:04.302 }, 00:06:04.302 { 00:06:04.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:04.302 "dma_device_type": 2 00:06:04.302 } 00:06:04.302 ], 00:06:04.302 "name": "Malloc0", 00:06:04.302 "num_blocks": 16384, 00:06:04.302 "product_name": "Malloc disk", 00:06:04.302 "supported_io_types": { 00:06:04.302 "abort": true, 00:06:04.302 "compare": false, 00:06:04.302 "compare_and_write": false, 00:06:04.302 "flush": true, 00:06:04.302 "nvme_admin": false, 00:06:04.302 "nvme_io": false, 00:06:04.302 "read": true, 00:06:04.302 "reset": true, 00:06:04.302 "unmap": true, 00:06:04.302 "write": true, 00:06:04.302 "write_zeroes": true 00:06:04.302 }, 00:06:04.302 "uuid": "3ce674e1-e5af-423c-ae75-d6b80ff0ac31", 00:06:04.302 "zoned": false 00:06:04.302 }, 00:06:04.302 { 00:06:04.302 "aliases": [ 00:06:04.302 "c6310c40-8f65-5a26-a37c-5f00b68bf927" 00:06:04.302 ], 00:06:04.302 "assigned_rate_limits": { 00:06:04.302 "r_mbytes_per_sec": 0, 00:06:04.302 "rw_ios_per_sec": 0, 00:06:04.302 "rw_mbytes_per_sec": 0, 00:06:04.302 "w_mbytes_per_sec": 0 00:06:04.302 }, 00:06:04.302 "block_size": 512, 00:06:04.302 "claimed": false, 00:06:04.302 "driver_specific": { 00:06:04.302 "passthru": { 00:06:04.302 "base_bdev_name": "Malloc0", 00:06:04.302 "name": 
"Passthru0" 00:06:04.302 } 00:06:04.302 }, 00:06:04.302 "memory_domains": [ 00:06:04.302 { 00:06:04.302 "dma_device_id": "system", 00:06:04.302 "dma_device_type": 1 00:06:04.302 }, 00:06:04.302 { 00:06:04.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:04.302 "dma_device_type": 2 00:06:04.302 } 00:06:04.302 ], 00:06:04.302 "name": "Passthru0", 00:06:04.302 "num_blocks": 16384, 00:06:04.302 "product_name": "passthru", 00:06:04.302 "supported_io_types": { 00:06:04.302 "abort": true, 00:06:04.302 "compare": false, 00:06:04.302 "compare_and_write": false, 00:06:04.302 "flush": true, 00:06:04.302 "nvme_admin": false, 00:06:04.302 "nvme_io": false, 00:06:04.302 "read": true, 00:06:04.302 "reset": true, 00:06:04.302 "unmap": true, 00:06:04.302 "write": true, 00:06:04.302 "write_zeroes": true 00:06:04.302 }, 00:06:04.302 "uuid": "c6310c40-8f65-5a26-a37c-5f00b68bf927", 00:06:04.302 "zoned": false 00:06:04.302 } 00:06:04.302 ]' 00:06:04.302 16:17:38 -- rpc/rpc.sh@21 -- # jq length 00:06:04.563 16:17:38 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:04.563 16:17:38 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:04.563 16:17:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:04.563 16:17:38 -- common/autotest_common.sh@10 -- # set +x 00:06:04.563 16:17:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:04.563 16:17:38 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:04.563 16:17:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:04.563 16:17:38 -- common/autotest_common.sh@10 -- # set +x 00:06:04.563 16:17:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:04.563 16:17:38 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:04.563 16:17:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:04.563 16:17:38 -- common/autotest_common.sh@10 -- # set +x 00:06:04.563 16:17:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:04.563 16:17:38 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:04.563 16:17:38 -- rpc/rpc.sh@26 -- # jq length 00:06:04.563 16:17:38 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:04.563 00:06:04.563 real 0m0.320s 00:06:04.563 user 0m0.211s 00:06:04.563 sys 0m0.038s 00:06:04.563 16:17:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:04.563 16:17:38 -- common/autotest_common.sh@10 -- # set +x 00:06:04.563 ************************************ 00:06:04.563 END TEST rpc_integrity 00:06:04.563 ************************************ 00:06:04.563 16:17:38 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:04.563 16:17:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:04.563 16:17:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.563 16:17:38 -- common/autotest_common.sh@10 -- # set +x 00:06:04.563 ************************************ 00:06:04.563 START TEST rpc_plugins 00:06:04.563 ************************************ 00:06:04.563 16:17:38 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:06:04.563 16:17:38 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:04.563 16:17:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:04.563 16:17:38 -- common/autotest_common.sh@10 -- # set +x 00:06:04.563 16:17:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:04.563 16:17:38 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:04.563 16:17:38 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:04.563 16:17:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:04.563 16:17:38 -- common/autotest_common.sh@10 -- # set +x 00:06:04.563 16:17:38 
-- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:04.563 16:17:38 -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:04.563 { 00:06:04.563 "aliases": [ 00:06:04.563 "85b247bc-ddf1-49c0-92a8-fdf72957edca" 00:06:04.563 ], 00:06:04.563 "assigned_rate_limits": { 00:06:04.563 "r_mbytes_per_sec": 0, 00:06:04.563 "rw_ios_per_sec": 0, 00:06:04.563 "rw_mbytes_per_sec": 0, 00:06:04.563 "w_mbytes_per_sec": 0 00:06:04.563 }, 00:06:04.563 "block_size": 4096, 00:06:04.563 "claimed": false, 00:06:04.563 "driver_specific": {}, 00:06:04.563 "memory_domains": [ 00:06:04.563 { 00:06:04.563 "dma_device_id": "system", 00:06:04.563 "dma_device_type": 1 00:06:04.563 }, 00:06:04.563 { 00:06:04.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:04.563 "dma_device_type": 2 00:06:04.563 } 00:06:04.563 ], 00:06:04.563 "name": "Malloc1", 00:06:04.563 "num_blocks": 256, 00:06:04.563 "product_name": "Malloc disk", 00:06:04.563 "supported_io_types": { 00:06:04.563 "abort": true, 00:06:04.563 "compare": false, 00:06:04.563 "compare_and_write": false, 00:06:04.563 "flush": true, 00:06:04.563 "nvme_admin": false, 00:06:04.563 "nvme_io": false, 00:06:04.563 "read": true, 00:06:04.563 "reset": true, 00:06:04.563 "unmap": true, 00:06:04.563 "write": true, 00:06:04.563 "write_zeroes": true 00:06:04.563 }, 00:06:04.563 "uuid": "85b247bc-ddf1-49c0-92a8-fdf72957edca", 00:06:04.563 "zoned": false 00:06:04.563 } 00:06:04.563 ]' 00:06:04.563 16:17:38 -- rpc/rpc.sh@32 -- # jq length 00:06:04.822 16:17:38 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:04.822 16:17:38 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:04.822 16:17:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:04.822 16:17:38 -- common/autotest_common.sh@10 -- # set +x 00:06:04.822 16:17:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:04.822 16:17:38 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:04.822 16:17:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:04.822 16:17:38 -- common/autotest_common.sh@10 -- # set +x 00:06:04.822 16:17:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:04.822 16:17:38 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:04.822 16:17:38 -- rpc/rpc.sh@36 -- # jq length 00:06:04.822 16:17:38 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:04.822 00:06:04.822 real 0m0.165s 00:06:04.822 user 0m0.116s 00:06:04.822 sys 0m0.015s 00:06:04.822 16:17:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:04.822 ************************************ 00:06:04.822 16:17:38 -- common/autotest_common.sh@10 -- # set +x 00:06:04.822 END TEST rpc_plugins 00:06:04.822 ************************************ 00:06:04.822 16:17:38 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:04.822 16:17:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:04.822 16:17:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.822 16:17:38 -- common/autotest_common.sh@10 -- # set +x 00:06:04.822 ************************************ 00:06:04.822 START TEST rpc_trace_cmd_test 00:06:04.822 ************************************ 00:06:04.822 16:17:38 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:06:04.822 16:17:38 -- rpc/rpc.sh@40 -- # local info 00:06:04.822 16:17:38 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:04.822 16:17:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:04.822 16:17:38 -- common/autotest_common.sh@10 -- # set +x 00:06:04.822 16:17:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:04.822 16:17:38 -- rpc/rpc.sh@42 -- # 
info='{ 00:06:04.822 "bdev": { 00:06:04.822 "mask": "0x8", 00:06:04.822 "tpoint_mask": "0xffffffffffffffff" 00:06:04.822 }, 00:06:04.822 "bdev_nvme": { 00:06:04.822 "mask": "0x4000", 00:06:04.822 "tpoint_mask": "0x0" 00:06:04.822 }, 00:06:04.822 "blobfs": { 00:06:04.822 "mask": "0x80", 00:06:04.822 "tpoint_mask": "0x0" 00:06:04.822 }, 00:06:04.822 "dsa": { 00:06:04.822 "mask": "0x200", 00:06:04.822 "tpoint_mask": "0x0" 00:06:04.822 }, 00:06:04.822 "ftl": { 00:06:04.822 "mask": "0x40", 00:06:04.822 "tpoint_mask": "0x0" 00:06:04.822 }, 00:06:04.822 "iaa": { 00:06:04.822 "mask": "0x1000", 00:06:04.822 "tpoint_mask": "0x0" 00:06:04.822 }, 00:06:04.822 "iscsi_conn": { 00:06:04.822 "mask": "0x2", 00:06:04.822 "tpoint_mask": "0x0" 00:06:04.822 }, 00:06:04.822 "nvme_pcie": { 00:06:04.822 "mask": "0x800", 00:06:04.822 "tpoint_mask": "0x0" 00:06:04.822 }, 00:06:04.822 "nvme_tcp": { 00:06:04.822 "mask": "0x2000", 00:06:04.822 "tpoint_mask": "0x0" 00:06:04.822 }, 00:06:04.822 "nvmf_rdma": { 00:06:04.822 "mask": "0x10", 00:06:04.822 "tpoint_mask": "0x0" 00:06:04.822 }, 00:06:04.822 "nvmf_tcp": { 00:06:04.822 "mask": "0x20", 00:06:04.822 "tpoint_mask": "0x0" 00:06:04.822 }, 00:06:04.822 "scsi": { 00:06:04.822 "mask": "0x4", 00:06:04.822 "tpoint_mask": "0x0" 00:06:04.822 }, 00:06:04.822 "sock": { 00:06:04.822 "mask": "0x8000", 00:06:04.822 "tpoint_mask": "0x0" 00:06:04.822 }, 00:06:04.822 "thread": { 00:06:04.822 "mask": "0x400", 00:06:04.822 "tpoint_mask": "0x0" 00:06:04.822 }, 00:06:04.822 "tpoint_group_mask": "0x8", 00:06:04.822 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid60364" 00:06:04.822 }' 00:06:04.822 16:17:38 -- rpc/rpc.sh@43 -- # jq length 00:06:05.080 16:17:38 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:05.080 16:17:38 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:05.080 16:17:38 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:05.080 16:17:38 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:05.080 16:17:39 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:05.080 16:17:39 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:05.080 16:17:39 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:05.080 16:17:39 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:05.339 16:17:39 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:05.339 00:06:05.339 real 0m0.299s 00:06:05.339 user 0m0.256s 00:06:05.339 sys 0m0.031s 00:06:05.339 16:17:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:05.339 16:17:39 -- common/autotest_common.sh@10 -- # set +x 00:06:05.339 ************************************ 00:06:05.339 END TEST rpc_trace_cmd_test 00:06:05.339 ************************************ 00:06:05.339 16:17:39 -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:06:05.339 16:17:39 -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:06:05.339 16:17:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:05.339 16:17:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.339 16:17:39 -- common/autotest_common.sh@10 -- # set +x 00:06:05.339 ************************************ 00:06:05.339 START TEST go_rpc 00:06:05.339 ************************************ 00:06:05.339 16:17:39 -- common/autotest_common.sh@1111 -- # go_rpc 00:06:05.339 16:17:39 -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:06:05.339 16:17:39 -- rpc/rpc.sh@51 -- # bdevs='[]' 00:06:05.339 16:17:39 -- rpc/rpc.sh@52 -- # jq length 00:06:05.339 16:17:39 -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:06:05.339 16:17:39 -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:06:05.339 
16:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:05.339 16:17:39 -- common/autotest_common.sh@10 -- # set +x 00:06:05.339 16:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:05.339 16:17:39 -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:06:05.339 16:17:39 -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:06:05.339 16:17:39 -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["c2560e9e-6381-4284-8b6a-8980794e6e1f"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"flush":true,"nvme_admin":false,"nvme_io":false,"read":true,"reset":true,"unmap":true,"write":true,"write_zeroes":true},"uuid":"c2560e9e-6381-4284-8b6a-8980794e6e1f","zoned":false}]' 00:06:05.339 16:17:39 -- rpc/rpc.sh@57 -- # jq length 00:06:05.598 16:17:39 -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:06:05.598 16:17:39 -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:05.598 16:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:05.598 16:17:39 -- common/autotest_common.sh@10 -- # set +x 00:06:05.598 16:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:05.598 16:17:39 -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:06:05.598 16:17:39 -- rpc/rpc.sh@60 -- # bdevs='[]' 00:06:05.598 16:17:39 -- rpc/rpc.sh@61 -- # jq length 00:06:05.598 16:17:39 -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:06:05.598 00:06:05.598 real 0m0.226s 00:06:05.598 user 0m0.161s 00:06:05.598 sys 0m0.032s 00:06:05.598 16:17:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:05.598 16:17:39 -- common/autotest_common.sh@10 -- # set +x 00:06:05.598 ************************************ 00:06:05.598 END TEST go_rpc 00:06:05.598 ************************************ 00:06:05.598 16:17:39 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:05.598 16:17:39 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:05.598 16:17:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:05.598 16:17:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.598 16:17:39 -- common/autotest_common.sh@10 -- # set +x 00:06:05.598 ************************************ 00:06:05.598 START TEST rpc_daemon_integrity 00:06:05.598 ************************************ 00:06:05.598 16:17:39 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:06:05.598 16:17:39 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:05.598 16:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:05.598 16:17:39 -- common/autotest_common.sh@10 -- # set +x 00:06:05.598 16:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:05.598 16:17:39 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:05.598 16:17:39 -- rpc/rpc.sh@13 -- # jq length 00:06:05.857 16:17:39 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:05.857 16:17:39 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:05.857 16:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:05.857 16:17:39 -- common/autotest_common.sh@10 -- # set +x 00:06:05.857 16:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:05.857 16:17:39 -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:06:05.857 16:17:39 -- rpc/rpc.sh@16 -- # rpc_cmd 
bdev_get_bdevs 00:06:05.857 16:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:05.857 16:17:39 -- common/autotest_common.sh@10 -- # set +x 00:06:05.857 16:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:05.857 16:17:39 -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:05.857 { 00:06:05.857 "aliases": [ 00:06:05.857 "d0e89745-2eca-4b63-8635-79f948eb5f21" 00:06:05.857 ], 00:06:05.857 "assigned_rate_limits": { 00:06:05.857 "r_mbytes_per_sec": 0, 00:06:05.857 "rw_ios_per_sec": 0, 00:06:05.857 "rw_mbytes_per_sec": 0, 00:06:05.857 "w_mbytes_per_sec": 0 00:06:05.857 }, 00:06:05.857 "block_size": 512, 00:06:05.857 "claimed": false, 00:06:05.857 "driver_specific": {}, 00:06:05.857 "memory_domains": [ 00:06:05.857 { 00:06:05.857 "dma_device_id": "system", 00:06:05.857 "dma_device_type": 1 00:06:05.857 }, 00:06:05.857 { 00:06:05.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:05.857 "dma_device_type": 2 00:06:05.857 } 00:06:05.857 ], 00:06:05.857 "name": "Malloc3", 00:06:05.857 "num_blocks": 16384, 00:06:05.857 "product_name": "Malloc disk", 00:06:05.857 "supported_io_types": { 00:06:05.857 "abort": true, 00:06:05.857 "compare": false, 00:06:05.857 "compare_and_write": false, 00:06:05.857 "flush": true, 00:06:05.857 "nvme_admin": false, 00:06:05.857 "nvme_io": false, 00:06:05.857 "read": true, 00:06:05.857 "reset": true, 00:06:05.857 "unmap": true, 00:06:05.857 "write": true, 00:06:05.857 "write_zeroes": true 00:06:05.857 }, 00:06:05.857 "uuid": "d0e89745-2eca-4b63-8635-79f948eb5f21", 00:06:05.857 "zoned": false 00:06:05.857 } 00:06:05.857 ]' 00:06:05.857 16:17:39 -- rpc/rpc.sh@17 -- # jq length 00:06:05.857 16:17:39 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:05.857 16:17:39 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:06:05.857 16:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:05.857 16:17:39 -- common/autotest_common.sh@10 -- # set +x 00:06:05.857 [2024-04-17 16:17:39.753266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:05.857 [2024-04-17 16:17:39.753324] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:05.857 [2024-04-17 16:17:39.753344] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10b2680 00:06:05.857 [2024-04-17 16:17:39.753354] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:05.857 [2024-04-17 16:17:39.755100] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:05.857 [2024-04-17 16:17:39.755142] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:05.857 Passthru0 00:06:05.857 16:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:05.857 16:17:39 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:05.857 16:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:05.857 16:17:39 -- common/autotest_common.sh@10 -- # set +x 00:06:05.857 16:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:05.857 16:17:39 -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:05.857 { 00:06:05.857 "aliases": [ 00:06:05.857 "d0e89745-2eca-4b63-8635-79f948eb5f21" 00:06:05.857 ], 00:06:05.857 "assigned_rate_limits": { 00:06:05.857 "r_mbytes_per_sec": 0, 00:06:05.857 "rw_ios_per_sec": 0, 00:06:05.857 "rw_mbytes_per_sec": 0, 00:06:05.857 "w_mbytes_per_sec": 0 00:06:05.857 }, 00:06:05.857 "block_size": 512, 00:06:05.857 "claim_type": "exclusive_write", 00:06:05.857 "claimed": true, 00:06:05.857 "driver_specific": {}, 00:06:05.857 
"memory_domains": [ 00:06:05.857 { 00:06:05.857 "dma_device_id": "system", 00:06:05.857 "dma_device_type": 1 00:06:05.857 }, 00:06:05.857 { 00:06:05.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:05.857 "dma_device_type": 2 00:06:05.857 } 00:06:05.857 ], 00:06:05.857 "name": "Malloc3", 00:06:05.857 "num_blocks": 16384, 00:06:05.857 "product_name": "Malloc disk", 00:06:05.857 "supported_io_types": { 00:06:05.857 "abort": true, 00:06:05.857 "compare": false, 00:06:05.857 "compare_and_write": false, 00:06:05.857 "flush": true, 00:06:05.857 "nvme_admin": false, 00:06:05.857 "nvme_io": false, 00:06:05.857 "read": true, 00:06:05.857 "reset": true, 00:06:05.857 "unmap": true, 00:06:05.857 "write": true, 00:06:05.857 "write_zeroes": true 00:06:05.857 }, 00:06:05.857 "uuid": "d0e89745-2eca-4b63-8635-79f948eb5f21", 00:06:05.857 "zoned": false 00:06:05.857 }, 00:06:05.857 { 00:06:05.857 "aliases": [ 00:06:05.857 "649eef0c-a306-523d-94ad-4ead4f3cfa69" 00:06:05.857 ], 00:06:05.857 "assigned_rate_limits": { 00:06:05.857 "r_mbytes_per_sec": 0, 00:06:05.857 "rw_ios_per_sec": 0, 00:06:05.857 "rw_mbytes_per_sec": 0, 00:06:05.857 "w_mbytes_per_sec": 0 00:06:05.857 }, 00:06:05.857 "block_size": 512, 00:06:05.857 "claimed": false, 00:06:05.857 "driver_specific": { 00:06:05.857 "passthru": { 00:06:05.857 "base_bdev_name": "Malloc3", 00:06:05.857 "name": "Passthru0" 00:06:05.857 } 00:06:05.857 }, 00:06:05.857 "memory_domains": [ 00:06:05.857 { 00:06:05.857 "dma_device_id": "system", 00:06:05.857 "dma_device_type": 1 00:06:05.857 }, 00:06:05.857 { 00:06:05.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:05.857 "dma_device_type": 2 00:06:05.857 } 00:06:05.857 ], 00:06:05.857 "name": "Passthru0", 00:06:05.857 "num_blocks": 16384, 00:06:05.857 "product_name": "passthru", 00:06:05.857 "supported_io_types": { 00:06:05.857 "abort": true, 00:06:05.857 "compare": false, 00:06:05.857 "compare_and_write": false, 00:06:05.857 "flush": true, 00:06:05.857 "nvme_admin": false, 00:06:05.857 "nvme_io": false, 00:06:05.857 "read": true, 00:06:05.857 "reset": true, 00:06:05.857 "unmap": true, 00:06:05.857 "write": true, 00:06:05.857 "write_zeroes": true 00:06:05.857 }, 00:06:05.857 "uuid": "649eef0c-a306-523d-94ad-4ead4f3cfa69", 00:06:05.857 "zoned": false 00:06:05.857 } 00:06:05.857 ]' 00:06:05.857 16:17:39 -- rpc/rpc.sh@21 -- # jq length 00:06:05.857 16:17:39 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:05.857 16:17:39 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:05.857 16:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:05.857 16:17:39 -- common/autotest_common.sh@10 -- # set +x 00:06:05.857 16:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:05.857 16:17:39 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:06:05.857 16:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:05.857 16:17:39 -- common/autotest_common.sh@10 -- # set +x 00:06:05.857 16:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:05.857 16:17:39 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:05.857 16:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:05.857 16:17:39 -- common/autotest_common.sh@10 -- # set +x 00:06:05.857 16:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:05.857 16:17:39 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:05.857 16:17:39 -- rpc/rpc.sh@26 -- # jq length 00:06:06.116 16:17:39 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:06.116 00:06:06.116 real 0m0.321s 00:06:06.116 user 0m0.212s 00:06:06.116 sys 0m0.047s 00:06:06.116 
16:17:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:06.116 ************************************ 00:06:06.116 END TEST rpc_daemon_integrity 00:06:06.116 16:17:39 -- common/autotest_common.sh@10 -- # set +x 00:06:06.116 ************************************ 00:06:06.116 16:17:39 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:06.116 16:17:39 -- rpc/rpc.sh@84 -- # killprocess 60364 00:06:06.116 16:17:39 -- common/autotest_common.sh@936 -- # '[' -z 60364 ']' 00:06:06.116 16:17:39 -- common/autotest_common.sh@940 -- # kill -0 60364 00:06:06.116 16:17:39 -- common/autotest_common.sh@941 -- # uname 00:06:06.116 16:17:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:06.116 16:17:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60364 00:06:06.116 16:17:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:06.116 16:17:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:06.116 16:17:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60364' 00:06:06.116 killing process with pid 60364 00:06:06.116 16:17:39 -- common/autotest_common.sh@955 -- # kill 60364 00:06:06.116 16:17:39 -- common/autotest_common.sh@960 -- # wait 60364 00:06:06.375 00:06:06.375 real 0m3.545s 00:06:06.375 user 0m4.729s 00:06:06.375 sys 0m0.930s 00:06:06.375 16:17:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:06.375 ************************************ 00:06:06.375 END TEST rpc 00:06:06.375 ************************************ 00:06:06.375 16:17:40 -- common/autotest_common.sh@10 -- # set +x 00:06:06.634 16:17:40 -- spdk/autotest.sh@166 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:06.634 16:17:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:06.634 16:17:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:06.634 16:17:40 -- common/autotest_common.sh@10 -- # set +x 00:06:06.634 ************************************ 00:06:06.634 START TEST rpc_client 00:06:06.634 ************************************ 00:06:06.634 16:17:40 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:06.634 * Looking for test storage... 
00:06:06.634 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:06.634 16:17:40 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:06.634 OK 00:06:06.634 16:17:40 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:06.634 00:06:06.634 real 0m0.113s 00:06:06.634 user 0m0.050s 00:06:06.634 sys 0m0.067s 00:06:06.634 16:17:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:06.634 16:17:40 -- common/autotest_common.sh@10 -- # set +x 00:06:06.634 ************************************ 00:06:06.634 END TEST rpc_client 00:06:06.634 ************************************ 00:06:06.893 16:17:40 -- spdk/autotest.sh@167 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:06.893 16:17:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:06.893 16:17:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:06.893 16:17:40 -- common/autotest_common.sh@10 -- # set +x 00:06:06.893 ************************************ 00:06:06.893 START TEST json_config 00:06:06.893 ************************************ 00:06:06.893 16:17:40 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:06.893 16:17:40 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:06.893 16:17:40 -- nvmf/common.sh@7 -- # uname -s 00:06:06.893 16:17:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:06.893 16:17:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:06.893 16:17:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:06.893 16:17:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:06.893 16:17:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:06.893 16:17:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:06.893 16:17:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:06.893 16:17:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:06.893 16:17:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:06.893 16:17:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:06.893 16:17:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d 00:06:06.893 16:17:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=35bbb10f-fc38-42ac-b909-033700c5e05d 00:06:06.893 16:17:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:06.893 16:17:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:06.893 16:17:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:06.893 16:17:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:06.893 16:17:40 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:06.893 16:17:40 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:06.893 16:17:40 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:06.893 16:17:40 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:06.893 16:17:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.893 16:17:40 -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.893 16:17:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.893 16:17:40 -- paths/export.sh@5 -- # export PATH 00:06:06.893 16:17:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.893 16:17:40 -- nvmf/common.sh@47 -- # : 0 00:06:06.893 16:17:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:06.893 16:17:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:06.893 16:17:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:06.893 16:17:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:06.893 16:17:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:06.893 16:17:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:06.893 16:17:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:06.893 16:17:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:06.893 16:17:40 -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:06.893 16:17:40 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:06.893 16:17:40 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:06.893 16:17:40 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:06.893 16:17:40 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:06.893 16:17:40 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:06.893 16:17:40 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:06.893 16:17:40 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:06.893 16:17:40 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:06.893 16:17:40 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:06.893 16:17:40 -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:06.893 16:17:40 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:06:06.893 16:17:40 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:06.893 16:17:40 -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:06.893 
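Reference sketch (not part of the captured run): the app_params/configs_path tables above encode the pattern the test is about to drive, start the target suspended with --wait-for-rpc, finish framework init over RPC, then snapshot the live configuration to JSON:
  build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock framework_start_init
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json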
16:17:40 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:06.893 INFO: JSON configuration test init 00:06:06.893 16:17:40 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:06:06.893 16:17:40 -- json_config/json_config.sh@357 -- # json_config_test_init 00:06:06.893 16:17:40 -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:06:06.893 16:17:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:06.893 16:17:40 -- common/autotest_common.sh@10 -- # set +x 00:06:06.894 16:17:40 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:06:06.894 16:17:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:06.894 16:17:40 -- common/autotest_common.sh@10 -- # set +x 00:06:06.894 16:17:40 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:06:06.894 16:17:40 -- json_config/common.sh@9 -- # local app=target 00:06:06.894 16:17:40 -- json_config/common.sh@10 -- # shift 00:06:06.894 16:17:40 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:06.894 16:17:40 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:06.894 16:17:40 -- json_config/common.sh@15 -- # local app_extra_params= 00:06:06.894 16:17:40 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:06.894 16:17:40 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:06.894 16:17:40 -- json_config/common.sh@22 -- # app_pid["$app"]=60699 00:06:06.894 Waiting for target to run... 00:06:06.894 16:17:40 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:06.894 16:17:40 -- json_config/common.sh@25 -- # waitforlisten 60699 /var/tmp/spdk_tgt.sock 00:06:06.894 16:17:40 -- common/autotest_common.sh@817 -- # '[' -z 60699 ']' 00:06:06.894 16:17:40 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:06.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:06.894 16:17:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:06.894 16:17:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:06.894 16:17:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:06.894 16:17:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:06.894 16:17:40 -- common/autotest_common.sh@10 -- # set +x 00:06:06.894 [2024-04-17 16:17:40.920745] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 
00:06:06.894 [2024-04-17 16:17:40.920910] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60699 ] 00:06:07.462 [2024-04-17 16:17:41.360460] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.462 [2024-04-17 16:17:41.465849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.029 16:17:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:08.029 00:06:08.029 16:17:41 -- common/autotest_common.sh@850 -- # return 0 00:06:08.029 16:17:41 -- json_config/common.sh@26 -- # echo '' 00:06:08.029 16:17:41 -- json_config/json_config.sh@269 -- # create_accel_config 00:06:08.029 16:17:41 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:06:08.029 16:17:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:08.029 16:17:41 -- common/autotest_common.sh@10 -- # set +x 00:06:08.029 16:17:41 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:06:08.029 16:17:41 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:06:08.029 16:17:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:08.029 16:17:41 -- common/autotest_common.sh@10 -- # set +x 00:06:08.029 16:17:41 -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:08.029 16:17:41 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:06:08.029 16:17:41 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:08.597 16:17:42 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:06:08.597 16:17:42 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:08.597 16:17:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:08.597 16:17:42 -- common/autotest_common.sh@10 -- # set +x 00:06:08.597 16:17:42 -- json_config/json_config.sh@45 -- # local ret=0 00:06:08.597 16:17:42 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:08.597 16:17:42 -- json_config/json_config.sh@46 -- # local enabled_types 00:06:08.597 16:17:42 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:08.597 16:17:42 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:08.597 16:17:42 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:08.856 16:17:42 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:08.856 16:17:42 -- json_config/json_config.sh@48 -- # local get_types 00:06:08.856 16:17:42 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:06:08.856 16:17:42 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:06:08.856 16:17:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:08.856 16:17:42 -- common/autotest_common.sh@10 -- # set +x 00:06:08.856 16:17:42 -- json_config/json_config.sh@55 -- # return 0 00:06:08.856 16:17:42 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:06:08.856 16:17:42 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:08.856 16:17:42 -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:08.856 16:17:42 -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 
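Reference sketch (not part of the captured run): create_nvmf_subsystem_config, which runs next, assembles the TCP target entirely over the spdk_tgt.sock RPC channel; condensed, the sequence is:
  rpc='scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
  $rpc nvmf_create_transport -t tcp -u 8192 -c 0
  $rpc bdev_malloc_create 8 512 --name MallocForNvmf0
  $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420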
00:06:08.856 16:17:42 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:06:08.856 16:17:42 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:06:08.856 16:17:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:08.856 16:17:42 -- common/autotest_common.sh@10 -- # set +x 00:06:08.856 16:17:42 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:08.856 16:17:42 -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:06:08.856 16:17:42 -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:06:08.856 16:17:42 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:08.856 16:17:42 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:09.163 MallocForNvmf0 00:06:09.163 16:17:43 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:09.163 16:17:43 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:09.445 MallocForNvmf1 00:06:09.445 16:17:43 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:09.445 16:17:43 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:09.704 [2024-04-17 16:17:43.675961] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:09.704 16:17:43 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:09.704 16:17:43 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:09.963 16:17:43 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:09.963 16:17:43 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:10.222 16:17:44 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:10.222 16:17:44 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:10.480 16:17:44 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:10.480 16:17:44 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:10.739 [2024-04-17 16:17:44.776982] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:10.998 16:17:44 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:06:10.998 16:17:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:10.998 16:17:44 -- common/autotest_common.sh@10 -- # set +x 00:06:10.998 16:17:44 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:06:10.998 16:17:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:10.998 16:17:44 -- 
common/autotest_common.sh@10 -- # set +x 00:06:10.998 16:17:44 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:06:10.998 16:17:44 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:10.998 16:17:44 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:11.257 MallocBdevForConfigChangeCheck 00:06:11.257 16:17:45 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:06:11.257 16:17:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:11.257 16:17:45 -- common/autotest_common.sh@10 -- # set +x 00:06:11.257 16:17:45 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:06:11.257 16:17:45 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:11.824 INFO: shutting down applications... 00:06:11.824 16:17:45 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:06:11.824 16:17:45 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:06:11.824 16:17:45 -- json_config/json_config.sh@368 -- # json_config_clear target 00:06:11.824 16:17:45 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:06:11.824 16:17:45 -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:12.084 Calling clear_iscsi_subsystem 00:06:12.084 Calling clear_nvmf_subsystem 00:06:12.084 Calling clear_nbd_subsystem 00:06:12.084 Calling clear_ublk_subsystem 00:06:12.084 Calling clear_vhost_blk_subsystem 00:06:12.084 Calling clear_vhost_scsi_subsystem 00:06:12.084 Calling clear_bdev_subsystem 00:06:12.084 16:17:45 -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:06:12.084 16:17:45 -- json_config/json_config.sh@343 -- # count=100 00:06:12.084 16:17:45 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:06:12.084 16:17:45 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:12.084 16:17:45 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:12.084 16:17:45 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:06:12.650 16:17:46 -- json_config/json_config.sh@345 -- # break 00:06:12.650 16:17:46 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:06:12.650 16:17:46 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:06:12.650 16:17:46 -- json_config/common.sh@31 -- # local app=target 00:06:12.650 16:17:46 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:12.650 16:17:46 -- json_config/common.sh@35 -- # [[ -n 60699 ]] 00:06:12.650 16:17:46 -- json_config/common.sh@38 -- # kill -SIGINT 60699 00:06:12.650 16:17:46 -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:12.650 16:17:46 -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:12.650 16:17:46 -- json_config/common.sh@41 -- # kill -0 60699 00:06:12.650 16:17:46 -- json_config/common.sh@45 -- # sleep 0.5 00:06:12.908 16:17:46 -- json_config/common.sh@40 -- # (( i++ )) 00:06:12.908 16:17:46 -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:12.908 16:17:46 -- json_config/common.sh@41 -- # kill -0 60699 00:06:12.908 16:17:46 -- 
json_config/common.sh@42 -- # app_pid["$app"]= 00:06:12.908 16:17:46 -- json_config/common.sh@43 -- # break 00:06:12.908 16:17:46 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:12.908 16:17:46 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:12.908 SPDK target shutdown done 00:06:12.908 INFO: relaunching applications... 00:06:12.908 16:17:46 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:06:12.908 16:17:46 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:12.908 16:17:46 -- json_config/common.sh@9 -- # local app=target 00:06:12.908 16:17:46 -- json_config/common.sh@10 -- # shift 00:06:12.908 16:17:46 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:12.908 16:17:46 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:12.908 16:17:46 -- json_config/common.sh@15 -- # local app_extra_params= 00:06:12.908 16:17:46 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:12.908 16:17:46 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:12.908 16:17:46 -- json_config/common.sh@22 -- # app_pid["$app"]=60979 00:06:12.908 16:17:46 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:12.908 Waiting for target to run... 00:06:12.908 16:17:46 -- json_config/common.sh@25 -- # waitforlisten 60979 /var/tmp/spdk_tgt.sock 00:06:12.908 16:17:46 -- common/autotest_common.sh@817 -- # '[' -z 60979 ']' 00:06:12.908 16:17:46 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:12.908 16:17:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:12.908 16:17:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:12.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:12.908 16:17:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:12.908 16:17:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:12.908 16:17:46 -- common/autotest_common.sh@10 -- # set +x 00:06:13.167 [2024-04-17 16:17:46.968558] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:06:13.167 [2024-04-17 16:17:46.968701] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60979 ] 00:06:13.734 [2024-04-17 16:17:47.526732] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.734 [2024-04-17 16:17:47.632691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.995 [2024-04-17 16:17:47.954765] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:13.995 [2024-04-17 16:17:47.986884] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:13.995 16:17:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:13.995 16:17:48 -- common/autotest_common.sh@850 -- # return 0 00:06:13.995 00:06:13.995 INFO: Checking if target configuration is the same... 
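Reference sketch (not part of the captured run): the comparison below normalizes both the on-disk config and a fresh save_config snapshot with config_filter.py -method sort before diffing, so JSON key order cannot produce a false mismatch; in essence:
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | test/json_config/config_filter.py -method sort > live.json
  test/json_config/config_filter.py -method sort < spdk_tgt_config.json > disk.json
  diff -u disk.json live.json && echo 'INFO: JSON config files are the same'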
00:06:13.995 16:17:48 -- json_config/common.sh@26 -- # echo '' 00:06:13.995 16:17:48 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:13.995 16:17:48 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:13.996 16:17:48 -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:13.996 16:17:48 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:13.996 16:17:48 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:13.996 + '[' 2 -ne 2 ']' 00:06:13.996 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:13.996 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:06:13.996 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:13.996 +++ basename /dev/fd/62 00:06:14.256 ++ mktemp /tmp/62.XXX 00:06:14.256 + tmp_file_1=/tmp/62.krC 00:06:14.256 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:14.256 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:14.256 + tmp_file_2=/tmp/spdk_tgt_config.json.70P 00:06:14.256 + ret=0 00:06:14.256 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:14.513 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:14.513 + diff -u /tmp/62.krC /tmp/spdk_tgt_config.json.70P 00:06:14.513 + echo 'INFO: JSON config files are the same' 00:06:14.513 INFO: JSON config files are the same 00:06:14.513 + rm /tmp/62.krC /tmp/spdk_tgt_config.json.70P 00:06:14.513 + exit 0 00:06:14.513 16:17:48 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:14.513 INFO: changing configuration and checking if this can be detected... 00:06:14.513 16:17:48 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:14.513 16:17:48 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:14.513 16:17:48 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:15.080 16:17:48 -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:15.080 16:17:48 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:15.080 16:17:48 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:15.080 + '[' 2 -ne 2 ']' 00:06:15.080 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:15.080 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:06:15.080 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:15.080 +++ basename /dev/fd/62 00:06:15.080 ++ mktemp /tmp/62.XXX 00:06:15.080 + tmp_file_1=/tmp/62.2lp 00:06:15.080 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:15.080 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:15.080 + tmp_file_2=/tmp/spdk_tgt_config.json.IXa 00:06:15.080 + ret=0 00:06:15.080 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:15.339 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:15.339 + diff -u /tmp/62.2lp /tmp/spdk_tgt_config.json.IXa 00:06:15.339 + ret=1 00:06:15.339 + echo '=== Start of file: /tmp/62.2lp ===' 00:06:15.339 + cat /tmp/62.2lp 00:06:15.339 + echo '=== End of file: /tmp/62.2lp ===' 00:06:15.339 + echo '' 00:06:15.339 + echo '=== Start of file: /tmp/spdk_tgt_config.json.IXa ===' 00:06:15.339 + cat /tmp/spdk_tgt_config.json.IXa 00:06:15.339 + echo '=== End of file: /tmp/spdk_tgt_config.json.IXa ===' 00:06:15.339 + echo '' 00:06:15.339 + rm /tmp/62.2lp /tmp/spdk_tgt_config.json.IXa 00:06:15.339 + exit 1 00:06:15.339 INFO: configuration change detected. 00:06:15.339 16:17:49 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:15.339 16:17:49 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:15.339 16:17:49 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:15.339 16:17:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:15.339 16:17:49 -- common/autotest_common.sh@10 -- # set +x 00:06:15.339 16:17:49 -- json_config/json_config.sh@307 -- # local ret=0 00:06:15.339 16:17:49 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:15.339 16:17:49 -- json_config/json_config.sh@317 -- # [[ -n 60979 ]] 00:06:15.339 16:17:49 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:15.339 16:17:49 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:15.339 16:17:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:15.339 16:17:49 -- common/autotest_common.sh@10 -- # set +x 00:06:15.339 16:17:49 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:06:15.339 16:17:49 -- json_config/json_config.sh@193 -- # uname -s 00:06:15.339 16:17:49 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:15.339 16:17:49 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:15.339 16:17:49 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:15.339 16:17:49 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:15.339 16:17:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:15.339 16:17:49 -- common/autotest_common.sh@10 -- # set +x 00:06:15.598 16:17:49 -- json_config/json_config.sh@323 -- # killprocess 60979 00:06:15.598 16:17:49 -- common/autotest_common.sh@936 -- # '[' -z 60979 ']' 00:06:15.598 16:17:49 -- common/autotest_common.sh@940 -- # kill -0 60979 00:06:15.598 16:17:49 -- common/autotest_common.sh@941 -- # uname 00:06:15.598 16:17:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:15.598 16:17:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60979 00:06:15.598 16:17:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:15.598 16:17:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:15.598 killing process with pid 60979 00:06:15.598 16:17:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60979' 00:06:15.598 
16:17:49 -- common/autotest_common.sh@955 -- # kill 60979 00:06:15.598 16:17:49 -- common/autotest_common.sh@960 -- # wait 60979 00:06:15.857 16:17:49 -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:15.857 16:17:49 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:15.857 16:17:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:15.857 16:17:49 -- common/autotest_common.sh@10 -- # set +x 00:06:15.857 16:17:49 -- json_config/json_config.sh@328 -- # return 0 00:06:15.857 INFO: Success 00:06:15.857 16:17:49 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:15.857 00:06:15.857 real 0m8.991s 00:06:15.857 user 0m12.894s 00:06:15.857 sys 0m2.084s 00:06:15.857 16:17:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:15.857 16:17:49 -- common/autotest_common.sh@10 -- # set +x 00:06:15.857 ************************************ 00:06:15.857 END TEST json_config 00:06:15.857 ************************************ 00:06:15.857 16:17:49 -- spdk/autotest.sh@168 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:15.857 16:17:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:15.857 16:17:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:15.857 16:17:49 -- common/autotest_common.sh@10 -- # set +x 00:06:15.857 ************************************ 00:06:15.857 START TEST json_config_extra_key 00:06:15.857 ************************************ 00:06:15.857 16:17:49 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:16.115 16:17:49 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:16.115 16:17:49 -- nvmf/common.sh@7 -- # uname -s 00:06:16.115 16:17:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:16.115 16:17:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:16.115 16:17:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:16.115 16:17:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:16.115 16:17:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:16.115 16:17:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:16.115 16:17:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:16.115 16:17:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:16.116 16:17:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:16.116 16:17:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:16.116 16:17:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d 00:06:16.116 16:17:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=35bbb10f-fc38-42ac-b909-033700c5e05d 00:06:16.116 16:17:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:16.116 16:17:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:16.116 16:17:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:16.116 16:17:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:16.116 16:17:49 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:16.116 16:17:49 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:16.116 16:17:49 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:16.116 16:17:49 -- scripts/common.sh@511 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:06:16.116 16:17:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.116 16:17:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.116 16:17:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.116 16:17:49 -- paths/export.sh@5 -- # export PATH 00:06:16.116 16:17:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.116 16:17:49 -- nvmf/common.sh@47 -- # : 0 00:06:16.116 16:17:49 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:16.116 16:17:49 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:16.116 16:17:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:16.116 16:17:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:16.116 16:17:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:16.116 16:17:49 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:16.116 16:17:49 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:16.116 16:17:49 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:16.116 16:17:49 -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:16.116 16:17:49 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:16.116 16:17:49 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:16.116 16:17:49 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:16.116 16:17:49 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:16.116 16:17:49 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:16.116 16:17:49 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:16.116 16:17:49 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:16.116 16:17:49 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:16.116 16:17:49 -- json_config/json_config_extra_key.sh@22 -- # trap 
'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:16.116 INFO: launching applications... 00:06:16.116 16:17:49 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:16.116 16:17:49 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:16.116 16:17:49 -- json_config/common.sh@9 -- # local app=target 00:06:16.116 16:17:49 -- json_config/common.sh@10 -- # shift 00:06:16.116 16:17:49 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:16.116 16:17:49 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:16.116 16:17:49 -- json_config/common.sh@15 -- # local app_extra_params= 00:06:16.116 16:17:49 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:16.116 16:17:49 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:16.116 16:17:49 -- json_config/common.sh@22 -- # app_pid["$app"]=61160 00:06:16.116 Waiting for target to run... 00:06:16.116 16:17:49 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:16.116 16:17:49 -- json_config/common.sh@25 -- # waitforlisten 61160 /var/tmp/spdk_tgt.sock 00:06:16.116 16:17:49 -- common/autotest_common.sh@817 -- # '[' -z 61160 ']' 00:06:16.116 16:17:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:16.116 16:17:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:16.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:16.116 16:17:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:16.116 16:17:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:16.116 16:17:49 -- common/autotest_common.sh@10 -- # set +x 00:06:16.116 16:17:49 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:16.116 [2024-04-17 16:17:50.028753] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:06:16.116 [2024-04-17 16:17:50.028912] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61160 ] 00:06:16.699 [2024-04-17 16:17:50.468111] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.699 [2024-04-17 16:17:50.589712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.267 00:06:17.267 INFO: shutting down applications... 00:06:17.267 16:17:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:17.267 16:17:51 -- common/autotest_common.sh@850 -- # return 0 00:06:17.267 16:17:51 -- json_config/common.sh@26 -- # echo '' 00:06:17.267 16:17:51 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:06:17.267 16:17:51 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:17.267 16:17:51 -- json_config/common.sh@31 -- # local app=target 00:06:17.267 16:17:51 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:17.267 16:17:51 -- json_config/common.sh@35 -- # [[ -n 61160 ]] 00:06:17.267 16:17:51 -- json_config/common.sh@38 -- # kill -SIGINT 61160 00:06:17.267 16:17:51 -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:17.267 16:17:51 -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:17.267 16:17:51 -- json_config/common.sh@41 -- # kill -0 61160 00:06:17.267 16:17:51 -- json_config/common.sh@45 -- # sleep 0.5 00:06:17.525 16:17:51 -- json_config/common.sh@40 -- # (( i++ )) 00:06:17.525 16:17:51 -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:17.525 16:17:51 -- json_config/common.sh@41 -- # kill -0 61160 00:06:17.525 16:17:51 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:17.525 16:17:51 -- json_config/common.sh@43 -- # break 00:06:17.525 16:17:51 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:17.525 16:17:51 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:17.525 SPDK target shutdown done 00:06:17.525 16:17:51 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:17.525 Success 00:06:17.525 00:06:17.525 real 0m1.690s 00:06:17.525 user 0m1.650s 00:06:17.525 sys 0m0.457s 00:06:17.525 16:17:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:17.525 16:17:51 -- common/autotest_common.sh@10 -- # set +x 00:06:17.525 ************************************ 00:06:17.525 END TEST json_config_extra_key 00:06:17.525 ************************************ 00:06:17.784 16:17:51 -- spdk/autotest.sh@169 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:17.784 16:17:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:17.784 16:17:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:17.784 16:17:51 -- common/autotest_common.sh@10 -- # set +x 00:06:17.784 ************************************ 00:06:17.784 START TEST alias_rpc 00:06:17.784 ************************************ 00:06:17.784 16:17:51 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:17.784 * Looking for test storage... 00:06:17.784 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:17.784 16:17:51 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:17.784 16:17:51 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=61248 00:06:17.784 16:17:51 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:17.784 16:17:51 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 61248 00:06:17.784 16:17:51 -- common/autotest_common.sh@817 -- # '[' -z 61248 ']' 00:06:17.784 16:17:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.784 16:17:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:17.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.784 16:17:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.784 16:17:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:17.784 16:17:51 -- common/autotest_common.sh@10 -- # set +x 00:06:18.041 [2024-04-17 16:17:51.851486] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 
00:06:18.041 [2024-04-17 16:17:51.851648] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61248 ] 00:06:18.041 [2024-04-17 16:17:51.992818] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.299 [2024-04-17 16:17:52.139758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.864 16:17:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:18.864 16:17:52 -- common/autotest_common.sh@850 -- # return 0 00:06:18.864 16:17:52 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:19.122 16:17:53 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 61248 00:06:19.122 16:17:53 -- common/autotest_common.sh@936 -- # '[' -z 61248 ']' 00:06:19.122 16:17:53 -- common/autotest_common.sh@940 -- # kill -0 61248 00:06:19.380 16:17:53 -- common/autotest_common.sh@941 -- # uname 00:06:19.380 16:17:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:19.380 16:17:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61248 00:06:19.380 16:17:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:19.380 killing process with pid 61248 00:06:19.380 16:17:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:19.380 16:17:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61248' 00:06:19.380 16:17:53 -- common/autotest_common.sh@955 -- # kill 61248 00:06:19.380 16:17:53 -- common/autotest_common.sh@960 -- # wait 61248 00:06:19.639 00:06:19.639 real 0m1.950s 00:06:19.639 user 0m2.261s 00:06:19.639 sys 0m0.460s 00:06:19.639 ************************************ 00:06:19.639 END TEST alias_rpc 00:06:19.639 ************************************ 00:06:19.639 16:17:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:19.639 16:17:53 -- common/autotest_common.sh@10 -- # set +x 00:06:19.639 16:17:53 -- spdk/autotest.sh@171 -- # [[ 1 -eq 0 ]] 00:06:19.639 16:17:53 -- spdk/autotest.sh@175 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:19.639 16:17:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:19.639 16:17:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:19.639 16:17:53 -- common/autotest_common.sh@10 -- # set +x 00:06:19.898 ************************************ 00:06:19.898 START TEST dpdk_mem_utility 00:06:19.898 ************************************ 00:06:19.898 16:17:53 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:19.898 * Looking for test storage... 00:06:19.898 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:19.898 16:17:53 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:19.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
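Reference sketch (not part of the captured run): test_dpdk_mem_info.sh, starting below, pairs an RPC that dumps DPDK allocator state to a file with a script that summarizes it:
  scripts/rpc.py env_dpdk_get_mem_stats    # target writes /tmp/spdk_mem_dump.txt
  scripts/dpdk_mem_info.py                 # heap / mempool / memzone summary
  scripts/dpdk_mem_info.py -m 0            # element-level detail for heap 0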
00:06:19.898 16:17:53 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=61345 00:06:19.898 16:17:53 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 61345 00:06:19.898 16:17:53 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:19.898 16:17:53 -- common/autotest_common.sh@817 -- # '[' -z 61345 ']' 00:06:19.898 16:17:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.898 16:17:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:19.898 16:17:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.898 16:17:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:19.898 16:17:53 -- common/autotest_common.sh@10 -- # set +x 00:06:19.898 [2024-04-17 16:17:53.883379] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:06:19.898 [2024-04-17 16:17:53.883478] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61345 ] 00:06:20.157 [2024-04-17 16:17:54.014340] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.157 [2024-04-17 16:17:54.137025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.095 16:17:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:21.095 16:17:54 -- common/autotest_common.sh@850 -- # return 0 00:06:21.095 16:17:54 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:21.095 16:17:54 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:21.095 16:17:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:21.095 16:17:54 -- common/autotest_common.sh@10 -- # set +x 00:06:21.095 { 00:06:21.095 "filename": "/tmp/spdk_mem_dump.txt" 00:06:21.095 } 00:06:21.095 16:17:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:21.095 16:17:54 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:21.095 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:21.095 1 heaps totaling size 814.000000 MiB 00:06:21.095 size: 814.000000 MiB heap id: 0 00:06:21.095 end heaps---------- 00:06:21.095 8 mempools totaling size 598.116089 MiB 00:06:21.095 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:21.095 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:21.095 size: 84.521057 MiB name: bdev_io_61345 00:06:21.095 size: 51.011292 MiB name: evtpool_61345 00:06:21.095 size: 50.003479 MiB name: msgpool_61345 00:06:21.095 size: 21.763794 MiB name: PDU_Pool 00:06:21.095 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:21.095 size: 0.026123 MiB name: Session_Pool 00:06:21.095 end mempools------- 00:06:21.095 6 memzones totaling size 4.142822 MiB 00:06:21.095 size: 1.000366 MiB name: RG_ring_0_61345 00:06:21.095 size: 1.000366 MiB name: RG_ring_1_61345 00:06:21.095 size: 1.000366 MiB name: RG_ring_4_61345 00:06:21.095 size: 1.000366 MiB name: RG_ring_5_61345 00:06:21.095 size: 0.125366 MiB name: RG_ring_2_61345 00:06:21.095 size: 0.015991 MiB name: RG_ring_3_61345 00:06:21.095 end memzones------- 00:06:21.095 16:17:54 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:21.095 heap id: 0 total size: 
814.000000 MiB number of busy elements: 224 number of free elements: 15 00:06:21.095 list of free elements. size: 12.485840 MiB 00:06:21.095 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:21.095 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:21.095 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:21.095 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:21.095 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:21.095 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:21.095 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:21.095 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:21.095 element at address: 0x200000200000 with size: 0.837219 MiB 00:06:21.095 element at address: 0x20001aa00000 with size: 0.571899 MiB 00:06:21.095 element at address: 0x20000b200000 with size: 0.489441 MiB 00:06:21.095 element at address: 0x200000800000 with size: 0.486877 MiB 00:06:21.095 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:21.095 element at address: 0x200027e00000 with size: 0.398132 MiB 00:06:21.095 element at address: 0x200003a00000 with size: 0.351501 MiB 00:06:21.095 list of standard malloc elements. size: 199.251587 MiB 00:06:21.095 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:21.095 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:21.095 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:21.095 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:21.095 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:21.095 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:21.095 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:21.095 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:21.095 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:21.095 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:21.095 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:06:21.095 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:21.095 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:06:21.095 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:06:21.095 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:06:21.095 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:06:21.095 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:06:21.095 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:06:21.095 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:06:21.095 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:06:21.095 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:06:21.095 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:06:21.095 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:06:21.095 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:06:21.095 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:06:21.095 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:21.095 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:06:21.095 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:21.095 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:21.095 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:21.095 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:21.095 element at address: 
0x2000002d7700 with size: 0.000183 MiB 00:06:21.095 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:06:21.095 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:21.095 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:21.095 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:21.095 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:21.095 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:21.095 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:21.095 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:21.095 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:06:21.095 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:06:21.095 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:06:21.095 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:06:21.095 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:06:21.095 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:21.095 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:21.095 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:21.095 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:06:21.095 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:06:21.095 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:06:21.095 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:06:21.095 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:06:21.095 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:06:21.095 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:06:21.095 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:06:21.095 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:06:21.095 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:06:21.095 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:06:21.095 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:06:21.095 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:06:21.095 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:06:21.095 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:06:21.095 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:06:21.095 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:06:21.095 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:06:21.095 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:06:21.095 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:06:21.095 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:06:21.095 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:06:21.095 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:21.095 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:21.095 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:21.095 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:21.095 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:21.095 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:21.095 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:21.095 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:21.095 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:06:21.095 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:06:21.095 element at address: 0x20000b27d640 with size: 
0.000183 MiB 00:06:21.095 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:06:21.095 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:06:21.095 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:06:21.095 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:06:21.095 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:21.096 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:21.096 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:21.096 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:21.096 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:06:21.096 
element at address: 0x20001aa94300 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:21.096 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e65ec0 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e65f80 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6cb80 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:06:21.096 element at address: 
0x200027e6de00 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:06:21.096 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:06:21.097 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:06:21.097 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:06:21.097 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:06:21.097 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:06:21.097 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:21.097 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:21.097 list of memzone associated elements. 
size: 602.262573 MiB 00:06:21.097 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:21.097 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:21.097 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:21.097 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:21.097 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:21.097 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_61345_0 00:06:21.097 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:21.097 associated memzone info: size: 48.002930 MiB name: MP_evtpool_61345_0 00:06:21.097 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:21.097 associated memzone info: size: 48.002930 MiB name: MP_msgpool_61345_0 00:06:21.097 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:21.097 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:21.097 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:21.097 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:21.097 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:21.097 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_61345 00:06:21.097 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:21.097 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_61345 00:06:21.097 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:21.097 associated memzone info: size: 1.007996 MiB name: MP_evtpool_61345 00:06:21.097 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:21.097 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:21.097 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:21.097 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:21.097 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:21.097 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:21.097 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:21.097 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:21.097 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:21.097 associated memzone info: size: 1.000366 MiB name: RG_ring_0_61345 00:06:21.097 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:21.097 associated memzone info: size: 1.000366 MiB name: RG_ring_1_61345 00:06:21.097 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:21.097 associated memzone info: size: 1.000366 MiB name: RG_ring_4_61345 00:06:21.097 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:21.097 associated memzone info: size: 1.000366 MiB name: RG_ring_5_61345 00:06:21.097 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:21.097 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_61345 00:06:21.097 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:21.097 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:21.097 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:21.097 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:21.097 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:21.097 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:21.097 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:21.097 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_61345 00:06:21.097 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:21.097 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:21.097 element at address: 0x200027e66040 with size: 0.023743 MiB 00:06:21.097 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:21.097 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:21.097 associated memzone info: size: 0.015991 MiB name: RG_ring_3_61345 00:06:21.097 element at address: 0x200027e6c180 with size: 0.002441 MiB 00:06:21.097 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:21.097 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:06:21.097 associated memzone info: size: 0.000183 MiB name: MP_msgpool_61345 00:06:21.097 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:21.097 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_61345 00:06:21.097 element at address: 0x200027e6cc40 with size: 0.000305 MiB 00:06:21.097 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:21.097 16:17:55 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:21.097 16:17:55 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 61345 00:06:21.097 16:17:55 -- common/autotest_common.sh@936 -- # '[' -z 61345 ']' 00:06:21.097 16:17:55 -- common/autotest_common.sh@940 -- # kill -0 61345 00:06:21.097 16:17:55 -- common/autotest_common.sh@941 -- # uname 00:06:21.097 16:17:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:21.097 16:17:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61345 00:06:21.097 killing process with pid 61345 00:06:21.097 16:17:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:21.097 16:17:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:21.097 16:17:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61345' 00:06:21.097 16:17:55 -- common/autotest_common.sh@955 -- # kill 61345 00:06:21.097 16:17:55 -- common/autotest_common.sh@960 -- # wait 61345 00:06:21.664 ************************************ 00:06:21.664 END TEST dpdk_mem_utility 00:06:21.664 ************************************ 00:06:21.664 00:06:21.664 real 0m1.750s 00:06:21.664 user 0m1.924s 00:06:21.664 sys 0m0.433s 00:06:21.664 16:17:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:21.664 16:17:55 -- common/autotest_common.sh@10 -- # set +x 00:06:21.664 16:17:55 -- spdk/autotest.sh@176 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:21.664 16:17:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:21.664 16:17:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:21.664 16:17:55 -- common/autotest_common.sh@10 -- # set +x 00:06:21.664 ************************************ 00:06:21.664 START TEST event 00:06:21.664 ************************************ 00:06:21.664 16:17:55 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:21.664 * Looking for test storage... 
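
The two heap dumps in the test above come from one helper: the target writes /tmp/spdk_mem_dump.txt in response to the env_dpdk_get_mem_stats RPC, and scripts/dpdk_mem_info.py post-processes that file. Condensed from the trace (run from an SPDK checkout with the target still listening):

scripts/rpc.py env_dpdk_get_mem_stats    # target writes /tmp/spdk_mem_dump.txt
scripts/dpdk_mem_info.py                 # heap/mempool/memzone summary, as printed above
scripts/dpdk_mem_info.py -m 0            # detailed per-element map for heap id 0
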
00:06:21.664 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:21.664 16:17:55 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:21.664 16:17:55 -- bdev/nbd_common.sh@6 -- # set -e 00:06:21.664 16:17:55 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:21.664 16:17:55 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:21.664 16:17:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:21.665 16:17:55 -- common/autotest_common.sh@10 -- # set +x 00:06:21.922 ************************************ 00:06:21.922 START TEST event_perf 00:06:21.922 ************************************ 00:06:21.922 16:17:55 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:21.922 Running I/O for 1 seconds...[2024-04-17 16:17:55.768192] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:06:21.922 [2024-04-17 16:17:55.768440] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61449 ] 00:06:21.922 [2024-04-17 16:17:55.912724] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:22.180 [2024-04-17 16:17:56.036031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.180 [2024-04-17 16:17:56.036155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:22.180 Running I/O for 1 seconds...[2024-04-17 16:17:56.036260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:22.180 [2024-04-17 16:17:56.036264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.129 00:06:23.129 lcore 0: 194079 00:06:23.129 lcore 1: 194078 00:06:23.129 lcore 2: 194079 00:06:23.129 lcore 3: 194079 00:06:23.129 done. 00:06:23.129 00:06:23.129 real 0m1.406s 00:06:23.129 user 0m4.209s 00:06:23.129 sys 0m0.068s 00:06:23.129 16:17:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:23.129 16:17:57 -- common/autotest_common.sh@10 -- # set +x 00:06:23.129 ************************************ 00:06:23.129 END TEST event_perf 00:06:23.129 ************************************ 00:06:23.387 16:17:57 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:23.387 16:17:57 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:23.387 16:17:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:23.387 16:17:57 -- common/autotest_common.sh@10 -- # set +x 00:06:23.387 ************************************ 00:06:23.387 START TEST event_reactor 00:06:23.387 ************************************ 00:06:23.387 16:17:57 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:23.387 [2024-04-17 16:17:57.293478] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 
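
event_perf above schedules events on every core in the -m mask for -t seconds and prints a per-lcore count. Assuming the "lcore N: COUNT" output format shown in this log, the per-core numbers can be totalled with a one-liner:

test/event/event_perf/event_perf -m 0xF -t 1 |
  awk '/^lcore/ { sum += $3 } END { print sum " events across all reactors" }'
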
00:06:23.387 [2024-04-17 16:17:57.293704] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61486 ] 00:06:23.387 [2024-04-17 16:17:57.427090] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.646 [2024-04-17 16:17:57.556039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.022 test_start 00:06:25.022 oneshot 00:06:25.022 tick 100 00:06:25.022 tick 100 00:06:25.022 tick 250 00:06:25.022 tick 100 00:06:25.022 tick 100 00:06:25.022 tick 250 00:06:25.022 tick 500 00:06:25.022 tick 100 00:06:25.022 tick 100 00:06:25.022 tick 100 00:06:25.022 tick 250 00:06:25.022 tick 100 00:06:25.022 tick 100 00:06:25.022 test_end 00:06:25.022 00:06:25.022 real 0m1.394s 00:06:25.022 user 0m1.229s 00:06:25.022 sys 0m0.057s 00:06:25.022 16:17:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:25.022 16:17:58 -- common/autotest_common.sh@10 -- # set +x 00:06:25.022 ************************************ 00:06:25.022 END TEST event_reactor 00:06:25.022 ************************************ 00:06:25.022 16:17:58 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:25.022 16:17:58 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:25.022 16:17:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:25.022 16:17:58 -- common/autotest_common.sh@10 -- # set +x 00:06:25.022 ************************************ 00:06:25.022 START TEST event_reactor_perf 00:06:25.022 ************************************ 00:06:25.022 16:17:58 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:25.022 [2024-04-17 16:17:58.806413] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 
00:06:25.022 [2024-04-17 16:17:58.806517] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61531 ] 00:06:25.022 [2024-04-17 16:17:58.944022] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.281 [2024-04-17 16:17:59.072824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.216 test_start 00:06:26.216 test_end 00:06:26.216 Performance: 365040 events per second 00:06:26.216 ************************************ 00:06:26.216 END TEST event_reactor_perf 00:06:26.216 ************************************ 00:06:26.216 00:06:26.216 real 0m1.401s 00:06:26.216 user 0m1.225s 00:06:26.216 sys 0m0.067s 00:06:26.216 16:18:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:26.216 16:18:00 -- common/autotest_common.sh@10 -- # set +x 00:06:26.216 16:18:00 -- event/event.sh@49 -- # uname -s 00:06:26.216 16:18:00 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:26.216 16:18:00 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:26.216 16:18:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:26.216 16:18:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:26.216 16:18:00 -- common/autotest_common.sh@10 -- # set +x 00:06:26.474 ************************************ 00:06:26.474 START TEST event_scheduler 00:06:26.474 ************************************ 00:06:26.474 16:18:00 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:26.474 * Looking for test storage... 00:06:26.474 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:26.474 16:18:00 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:26.474 16:18:00 -- scheduler/scheduler.sh@35 -- # scheduler_pid=61598 00:06:26.474 16:18:00 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:26.474 16:18:00 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:26.474 16:18:00 -- scheduler/scheduler.sh@37 -- # waitforlisten 61598 00:06:26.474 16:18:00 -- common/autotest_common.sh@817 -- # '[' -z 61598 ']' 00:06:26.474 16:18:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.474 16:18:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:26.474 16:18:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.474 16:18:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:26.474 16:18:00 -- common/autotest_common.sh@10 -- # set +x 00:06:26.474 [2024-04-17 16:18:00.440537] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 
00:06:26.474 [2024-04-17 16:18:00.441406] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61598 ] 00:06:26.732 [2024-04-17 16:18:00.582097] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:26.732 [2024-04-17 16:18:00.714049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.732 [2024-04-17 16:18:00.714127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.732 [2024-04-17 16:18:00.714268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:26.732 [2024-04-17 16:18:00.714274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:27.681 16:18:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:27.681 16:18:01 -- common/autotest_common.sh@850 -- # return 0 00:06:27.681 16:18:01 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:27.681 16:18:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:27.681 16:18:01 -- common/autotest_common.sh@10 -- # set +x 00:06:27.681 POWER: Env isn't set yet! 00:06:27.681 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:27.681 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:27.681 POWER: Cannot set governor of lcore 0 to userspace 00:06:27.681 POWER: Attempting to initialise PSTAT power management... 00:06:27.681 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:27.681 POWER: Cannot set governor of lcore 0 to performance 00:06:27.681 POWER: Attempting to initialise AMD PSTATE power management... 00:06:27.681 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:27.681 POWER: Cannot set governor of lcore 0 to userspace 00:06:27.681 POWER: Attempting to initialise CPPC power management... 00:06:27.681 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:27.681 POWER: Cannot set governor of lcore 0 to userspace 00:06:27.681 POWER: Attempting to initialise VM power management... 00:06:27.681 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:27.681 POWER: Unable to set Power Management Environment for lcore 0 00:06:27.681 [2024-04-17 16:18:01.409005] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:06:27.681 [2024-04-17 16:18:01.409021] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:06:27.681 [2024-04-17 16:18:01.409030] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:06:27.681 16:18:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:27.681 16:18:01 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:27.681 16:18:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:27.681 16:18:01 -- common/autotest_common.sh@10 -- # set +x 00:06:27.681 [2024-04-17 16:18:01.506919] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
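
The scheduler_create_thread test that follows drives the scheduler app entirely over JSON-RPC; rpc_cmd in the trace forwards to scripts/rpc.py with a test-local plugin. Sketched from the traced calls — making the plugin importable via PYTHONPATH is an assumption about how --plugin resolves the module:

export PYTHONPATH=$PYTHONPATH:test/event/scheduler
# -n thread name, -m cpu mask, -a percent of time the thread reports itself busy
scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50   # thread id 11 -> 50% busy
scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12          # remove thread id 12
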
00:06:27.681 16:18:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:27.681 16:18:01 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:27.681 16:18:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:27.681 16:18:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:27.681 16:18:01 -- common/autotest_common.sh@10 -- # set +x 00:06:27.681 ************************************ 00:06:27.681 START TEST scheduler_create_thread 00:06:27.681 ************************************ 00:06:27.681 16:18:01 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:06:27.681 16:18:01 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:27.681 16:18:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:27.681 16:18:01 -- common/autotest_common.sh@10 -- # set +x 00:06:27.681 2 00:06:27.681 16:18:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:27.681 16:18:01 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:27.681 16:18:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:27.681 16:18:01 -- common/autotest_common.sh@10 -- # set +x 00:06:27.681 3 00:06:27.681 16:18:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:27.681 16:18:01 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:27.681 16:18:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:27.681 16:18:01 -- common/autotest_common.sh@10 -- # set +x 00:06:27.681 4 00:06:27.681 16:18:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:27.681 16:18:01 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:27.681 16:18:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:27.681 16:18:01 -- common/autotest_common.sh@10 -- # set +x 00:06:27.681 5 00:06:27.681 16:18:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:27.681 16:18:01 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:27.681 16:18:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:27.681 16:18:01 -- common/autotest_common.sh@10 -- # set +x 00:06:27.681 6 00:06:27.681 16:18:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:27.681 16:18:01 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:27.681 16:18:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:27.682 16:18:01 -- common/autotest_common.sh@10 -- # set +x 00:06:27.682 7 00:06:27.682 16:18:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:27.682 16:18:01 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:27.682 16:18:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:27.682 16:18:01 -- common/autotest_common.sh@10 -- # set +x 00:06:27.682 8 00:06:27.682 16:18:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:27.682 16:18:01 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:27.682 16:18:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:27.682 16:18:01 -- common/autotest_common.sh@10 -- # set +x 00:06:27.682 9 00:06:27.682 
16:18:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:27.682 16:18:01 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:27.682 16:18:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:27.682 16:18:01 -- common/autotest_common.sh@10 -- # set +x 00:06:27.682 10 00:06:27.682 16:18:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:27.682 16:18:01 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:27.682 16:18:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:27.682 16:18:01 -- common/autotest_common.sh@10 -- # set +x 00:06:27.682 16:18:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:27.682 16:18:01 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:27.682 16:18:01 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:27.682 16:18:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:27.682 16:18:01 -- common/autotest_common.sh@10 -- # set +x 00:06:27.682 16:18:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:27.682 16:18:01 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:27.682 16:18:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:27.682 16:18:01 -- common/autotest_common.sh@10 -- # set +x 00:06:29.581 16:18:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:29.581 16:18:03 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:29.582 16:18:03 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:29.582 16:18:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:29.582 16:18:03 -- common/autotest_common.sh@10 -- # set +x 00:06:30.516 16:18:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:30.516 00:06:30.516 real 0m2.613s 00:06:30.516 user 0m0.019s 00:06:30.516 sys 0m0.007s 00:06:30.516 16:18:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:30.516 16:18:04 -- common/autotest_common.sh@10 -- # set +x 00:06:30.516 ************************************ 00:06:30.516 END TEST scheduler_create_thread 00:06:30.516 ************************************ 00:06:30.516 16:18:04 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:30.516 16:18:04 -- scheduler/scheduler.sh@46 -- # killprocess 61598 00:06:30.516 16:18:04 -- common/autotest_common.sh@936 -- # '[' -z 61598 ']' 00:06:30.516 16:18:04 -- common/autotest_common.sh@940 -- # kill -0 61598 00:06:30.516 16:18:04 -- common/autotest_common.sh@941 -- # uname 00:06:30.516 16:18:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:30.516 16:18:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61598 00:06:30.516 killing process with pid 61598 00:06:30.516 16:18:04 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:30.516 16:18:04 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:30.516 16:18:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61598' 00:06:30.516 16:18:04 -- common/autotest_common.sh@955 -- # kill 61598 00:06:30.516 16:18:04 -- common/autotest_common.sh@960 -- # wait 61598 00:06:30.774 [2024-04-17 16:18:04.679652] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
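
The teardown traced above is the stock killprocess helper: verify the pid is alive, refuse to kill a sudo wrapper, then SIGTERM and reap. A simplified reconstruction — the real helper in test/common/autotest_common.sh also covers FreeBSD and sudo-owned processes:

killprocess() {
  local pid=$1
  [[ -n $pid ]] || return 1
  kill -0 "$pid" || return 1                         # must still be running
  local process_name
  process_name=$(ps --no-headers -o comm= "$pid")
  [[ $process_name != sudo ]] || return 1            # never SIGTERM a sudo wrapper
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid"                                        # reap and propagate exit status
}
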
00:06:31.033 00:06:31.033 real 0m4.641s 00:06:31.033 user 0m8.589s 00:06:31.033 sys 0m0.418s 00:06:31.033 ************************************ 00:06:31.033 END TEST event_scheduler 00:06:31.033 ************************************ 00:06:31.033 16:18:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:31.033 16:18:04 -- common/autotest_common.sh@10 -- # set +x 00:06:31.033 16:18:04 -- event/event.sh@51 -- # modprobe -n nbd 00:06:31.033 16:18:04 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:31.033 16:18:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:31.033 16:18:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:31.033 16:18:04 -- common/autotest_common.sh@10 -- # set +x 00:06:31.033 ************************************ 00:06:31.033 START TEST app_repeat 00:06:31.033 ************************************ 00:06:31.033 16:18:05 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:06:31.033 16:18:05 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.033 16:18:05 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.033 16:18:05 -- event/event.sh@13 -- # local nbd_list 00:06:31.033 16:18:05 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:31.033 16:18:05 -- event/event.sh@14 -- # local bdev_list 00:06:31.033 16:18:05 -- event/event.sh@15 -- # local repeat_times=4 00:06:31.033 16:18:05 -- event/event.sh@17 -- # modprobe nbd 00:06:31.033 Process app_repeat pid: 61723 00:06:31.033 spdk_app_start Round 0 00:06:31.033 16:18:05 -- event/event.sh@19 -- # repeat_pid=61723 00:06:31.033 16:18:05 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:31.033 16:18:05 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 61723' 00:06:31.033 16:18:05 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:31.033 16:18:05 -- event/event.sh@23 -- # for i in {0..2} 00:06:31.033 16:18:05 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:31.033 16:18:05 -- event/event.sh@25 -- # waitforlisten 61723 /var/tmp/spdk-nbd.sock 00:06:31.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:31.033 16:18:05 -- common/autotest_common.sh@817 -- # '[' -z 61723 ']' 00:06:31.033 16:18:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:31.033 16:18:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:31.033 16:18:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:31.033 16:18:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:31.033 16:18:05 -- common/autotest_common.sh@10 -- # set +x 00:06:31.292 [2024-04-17 16:18:05.087519] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 
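
The app_repeat setup that follows starts the app with its own RPC socket, creates two 64 MB malloc bdevs with 4 KiB blocks, and exports them as NBD devices over that socket. In outline (the RPCs must wait for the socket to come up, as the waitforlisten above does):

sock=/var/tmp/spdk-nbd.sock
test/event/app_repeat/app_repeat -r "$sock" -m 0x3 -t 4 &
scripts/rpc.py -s "$sock" bdev_malloc_create 64 4096         # -> Malloc0
scripts/rpc.py -s "$sock" bdev_malloc_create 64 4096         # -> Malloc1
scripts/rpc.py -s "$sock" nbd_start_disk Malloc0 /dev/nbd0
scripts/rpc.py -s "$sock" nbd_start_disk Malloc1 /dev/nbd1
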
00:06:31.292 [2024-04-17 16:18:05.087622] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61723 ] 00:06:31.292 [2024-04-17 16:18:05.224612] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:31.550 [2024-04-17 16:18:05.345616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.550 [2024-04-17 16:18:05.345628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.117 16:18:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:32.117 16:18:06 -- common/autotest_common.sh@850 -- # return 0 00:06:32.117 16:18:06 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:32.375 Malloc0 00:06:32.375 16:18:06 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:32.941 Malloc1 00:06:32.941 16:18:06 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:32.941 16:18:06 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.941 16:18:06 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:32.941 16:18:06 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:32.941 16:18:06 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.941 16:18:06 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:32.941 16:18:06 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:32.941 16:18:06 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.941 16:18:06 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:32.941 16:18:06 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:32.941 16:18:06 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.941 16:18:06 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:32.941 16:18:06 -- bdev/nbd_common.sh@12 -- # local i 00:06:32.941 16:18:06 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:32.941 16:18:06 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.941 16:18:06 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:33.199 /dev/nbd0 00:06:33.199 16:18:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:33.199 16:18:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:33.199 16:18:07 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:06:33.199 16:18:07 -- common/autotest_common.sh@855 -- # local i 00:06:33.199 16:18:07 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:33.199 16:18:07 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:33.199 16:18:07 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:06:33.199 16:18:07 -- common/autotest_common.sh@859 -- # break 00:06:33.199 16:18:07 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:33.199 16:18:07 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:33.199 16:18:07 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:33.199 1+0 records in 00:06:33.199 1+0 records out 00:06:33.199 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00033436 s, 12.3 MB/s 00:06:33.199 16:18:07 -- 
common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:33.199 16:18:07 -- common/autotest_common.sh@872 -- # size=4096 00:06:33.199 16:18:07 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:33.199 16:18:07 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:33.199 16:18:07 -- common/autotest_common.sh@875 -- # return 0 00:06:33.199 16:18:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:33.199 16:18:07 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:33.199 16:18:07 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:33.457 /dev/nbd1 00:06:33.457 16:18:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:33.457 16:18:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:33.457 16:18:07 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:06:33.457 16:18:07 -- common/autotest_common.sh@855 -- # local i 00:06:33.457 16:18:07 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:33.457 16:18:07 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:33.457 16:18:07 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:06:33.457 16:18:07 -- common/autotest_common.sh@859 -- # break 00:06:33.457 16:18:07 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:33.457 16:18:07 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:33.457 16:18:07 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:33.457 1+0 records in 00:06:33.457 1+0 records out 00:06:33.457 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000600834 s, 6.8 MB/s 00:06:33.457 16:18:07 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:33.457 16:18:07 -- common/autotest_common.sh@872 -- # size=4096 00:06:33.457 16:18:07 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:33.457 16:18:07 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:33.457 16:18:07 -- common/autotest_common.sh@875 -- # return 0 00:06:33.457 16:18:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:33.457 16:18:07 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:33.457 16:18:07 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:33.457 16:18:07 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.457 16:18:07 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:33.715 16:18:07 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:33.715 { 00:06:33.715 "bdev_name": "Malloc0", 00:06:33.715 "nbd_device": "/dev/nbd0" 00:06:33.715 }, 00:06:33.715 { 00:06:33.715 "bdev_name": "Malloc1", 00:06:33.715 "nbd_device": "/dev/nbd1" 00:06:33.715 } 00:06:33.715 ]' 00:06:33.715 16:18:07 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:33.715 16:18:07 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:33.715 { 00:06:33.715 "bdev_name": "Malloc0", 00:06:33.715 "nbd_device": "/dev/nbd0" 00:06:33.715 }, 00:06:33.715 { 00:06:33.715 "bdev_name": "Malloc1", 00:06:33.715 "nbd_device": "/dev/nbd1" 00:06:33.715 } 00:06:33.715 ]' 00:06:33.973 16:18:07 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:33.973 /dev/nbd1' 00:06:33.973 16:18:07 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:33.973 /dev/nbd1' 00:06:33.973 16:18:07 -- 
bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:33.973 16:18:07 -- bdev/nbd_common.sh@65 -- # count=2 00:06:33.973 16:18:07 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:33.973 16:18:07 -- bdev/nbd_common.sh@95 -- # count=2 00:06:33.973 16:18:07 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:33.973 16:18:07 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:33.973 16:18:07 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.973 16:18:07 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:33.973 16:18:07 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:33.973 16:18:07 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:33.973 16:18:07 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:33.973 16:18:07 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:33.973 256+0 records in 00:06:33.973 256+0 records out 00:06:33.973 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00475179 s, 221 MB/s 00:06:33.973 16:18:07 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:33.973 16:18:07 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:33.973 256+0 records in 00:06:33.973 256+0 records out 00:06:33.973 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251562 s, 41.7 MB/s 00:06:33.973 16:18:07 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:33.973 16:18:07 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:33.973 256+0 records in 00:06:33.973 256+0 records out 00:06:33.973 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.029943 s, 35.0 MB/s 00:06:33.973 16:18:07 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:33.973 16:18:07 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.973 16:18:07 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:33.973 16:18:07 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:33.973 16:18:07 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:33.973 16:18:07 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:33.973 16:18:07 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:33.973 16:18:07 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:33.973 16:18:07 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:33.973 16:18:07 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:33.973 16:18:07 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:33.973 16:18:07 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:33.973 16:18:07 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:33.973 16:18:07 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.973 16:18:07 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.973 16:18:07 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:33.973 16:18:07 -- bdev/nbd_common.sh@51 -- # local i 00:06:33.973 16:18:07 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.973 16:18:07 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:34.232 16:18:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:34.232 16:18:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:34.232 16:18:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:34.232 16:18:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.232 16:18:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.232 16:18:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:34.232 16:18:08 -- bdev/nbd_common.sh@41 -- # break 00:06:34.232 16:18:08 -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.232 16:18:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:34.232 16:18:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:34.490 16:18:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:34.490 16:18:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:34.490 16:18:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:34.490 16:18:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.490 16:18:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.490 16:18:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:34.490 16:18:08 -- bdev/nbd_common.sh@41 -- # break 00:06:34.490 16:18:08 -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.490 16:18:08 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:34.490 16:18:08 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.490 16:18:08 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:35.057 16:18:08 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:35.057 16:18:08 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:35.057 16:18:08 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:35.057 16:18:08 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:35.057 16:18:08 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:35.057 16:18:08 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:35.057 16:18:08 -- bdev/nbd_common.sh@65 -- # true 00:06:35.057 16:18:08 -- bdev/nbd_common.sh@65 -- # count=0 00:06:35.057 16:18:08 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:35.057 16:18:08 -- bdev/nbd_common.sh@104 -- # count=0 00:06:35.057 16:18:08 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:35.057 16:18:08 -- bdev/nbd_common.sh@109 -- # return 0 00:06:35.057 16:18:08 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:35.315 16:18:09 -- event/event.sh@35 -- # sleep 3 00:06:35.577 [2024-04-17 16:18:09.517814] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:35.848 [2024-04-17 16:18:09.643479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.848 [2024-04-17 16:18:09.643489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.848 [2024-04-17 16:18:09.698139] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:35.848 [2024-04-17 16:18:09.698206] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
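The trace above exercises nbd_dd_data_verify: a 1 MiB reference file of random data is written to each exported NBD device with dd, then each device is read back and byte-compared against the reference with cmp before the disks are stopped. A minimal standalone sketch of that write/verify flow, assuming two NBD devices are already mapped (the device paths and temp-file location here are illustrative, not the test's actual paths):

#!/usr/bin/env bash
# Sketch of the write/verify flow seen in nbd_dd_data_verify.
# Assumes /dev/nbd0 and /dev/nbd1 are already mapped (illustrative paths).
set -e

tmp_file=/tmp/nbdrandtest
nbd_list=(/dev/nbd0 /dev/nbd1)

# Write phase: fill a 1 MiB reference file, then copy it to every device
# with O_DIRECT so the data actually reaches the backing bdev.
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
done

# Verify phase: byte-compare the first 1 MiB of each device against the
# reference file; cmp exits non-zero (failing the script) on any mismatch.
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"
done

rm "$tmp_file"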
00:06:38.378 16:18:12 -- event/event.sh@23 -- # for i in {0..2} 00:06:38.378 spdk_app_start Round 1 00:06:38.378 16:18:12 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:38.378 16:18:12 -- event/event.sh@25 -- # waitforlisten 61723 /var/tmp/spdk-nbd.sock 00:06:38.378 16:18:12 -- common/autotest_common.sh@817 -- # '[' -z 61723 ']' 00:06:38.378 16:18:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:38.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:38.378 16:18:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:38.378 16:18:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:38.378 16:18:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:38.378 16:18:12 -- common/autotest_common.sh@10 -- # set +x 00:06:38.636 16:18:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:38.636 16:18:12 -- common/autotest_common.sh@850 -- # return 0 00:06:38.636 16:18:12 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:38.895 Malloc0 00:06:38.895 16:18:12 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:39.153 Malloc1 00:06:39.153 16:18:13 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:39.153 16:18:13 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.153 16:18:13 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:39.153 16:18:13 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:39.153 16:18:13 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.153 16:18:13 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:39.153 16:18:13 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:39.153 16:18:13 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.153 16:18:13 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:39.153 16:18:13 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:39.153 16:18:13 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.153 16:18:13 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:39.153 16:18:13 -- bdev/nbd_common.sh@12 -- # local i 00:06:39.153 16:18:13 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:39.153 16:18:13 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:39.153 16:18:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:39.413 /dev/nbd0 00:06:39.413 16:18:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:39.413 16:18:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:39.413 16:18:13 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:06:39.413 16:18:13 -- common/autotest_common.sh@855 -- # local i 00:06:39.413 16:18:13 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:39.413 16:18:13 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:39.413 16:18:13 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:06:39.413 16:18:13 -- common/autotest_common.sh@859 -- # break 00:06:39.413 16:18:13 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:39.413 16:18:13 -- common/autotest_common.sh@870 -- # (( i 
<= 20 )) 00:06:39.413 16:18:13 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:39.413 1+0 records in 00:06:39.414 1+0 records out 00:06:39.414 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002636 s, 15.5 MB/s 00:06:39.414 16:18:13 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:39.414 16:18:13 -- common/autotest_common.sh@872 -- # size=4096 00:06:39.414 16:18:13 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:39.414 16:18:13 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:39.414 16:18:13 -- common/autotest_common.sh@875 -- # return 0 00:06:39.414 16:18:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:39.414 16:18:13 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:39.414 16:18:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:39.673 /dev/nbd1 00:06:39.673 16:18:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:39.673 16:18:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:39.673 16:18:13 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:06:39.673 16:18:13 -- common/autotest_common.sh@855 -- # local i 00:06:39.673 16:18:13 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:39.673 16:18:13 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:39.673 16:18:13 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:06:39.673 16:18:13 -- common/autotest_common.sh@859 -- # break 00:06:39.673 16:18:13 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:39.673 16:18:13 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:39.673 16:18:13 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:39.673 1+0 records in 00:06:39.673 1+0 records out 00:06:39.673 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288307 s, 14.2 MB/s 00:06:39.673 16:18:13 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:39.673 16:18:13 -- common/autotest_common.sh@872 -- # size=4096 00:06:39.673 16:18:13 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:39.673 16:18:13 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:39.673 16:18:13 -- common/autotest_common.sh@875 -- # return 0 00:06:39.673 16:18:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:39.673 16:18:13 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:39.673 16:18:13 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:39.673 16:18:13 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.673 16:18:13 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:39.932 16:18:13 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:39.932 { 00:06:39.932 "bdev_name": "Malloc0", 00:06:39.932 "nbd_device": "/dev/nbd0" 00:06:39.932 }, 00:06:39.932 { 00:06:39.932 "bdev_name": "Malloc1", 00:06:39.932 "nbd_device": "/dev/nbd1" 00:06:39.932 } 00:06:39.932 ]' 00:06:39.932 16:18:13 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:39.932 { 00:06:39.932 "bdev_name": "Malloc0", 00:06:39.932 "nbd_device": "/dev/nbd0" 00:06:39.932 }, 00:06:39.932 { 00:06:39.932 "bdev_name": "Malloc1", 00:06:39.932 "nbd_device": "/dev/nbd1" 00:06:39.932 } 
00:06:39.932 ]' 00:06:39.932 16:18:13 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:40.191 16:18:13 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:40.191 /dev/nbd1' 00:06:40.191 16:18:13 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:40.191 /dev/nbd1' 00:06:40.191 16:18:13 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:40.191 16:18:14 -- bdev/nbd_common.sh@65 -- # count=2 00:06:40.191 16:18:14 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:40.191 16:18:14 -- bdev/nbd_common.sh@95 -- # count=2 00:06:40.191 16:18:14 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:40.191 16:18:14 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:40.191 16:18:14 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.191 16:18:14 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:40.191 16:18:14 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:40.191 16:18:14 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:40.191 16:18:14 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:40.191 16:18:14 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:40.191 256+0 records in 00:06:40.191 256+0 records out 00:06:40.191 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00679824 s, 154 MB/s 00:06:40.191 16:18:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:40.191 16:18:14 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:40.191 256+0 records in 00:06:40.191 256+0 records out 00:06:40.191 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0244934 s, 42.8 MB/s 00:06:40.191 16:18:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:40.191 16:18:14 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:40.191 256+0 records in 00:06:40.191 256+0 records out 00:06:40.191 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0277245 s, 37.8 MB/s 00:06:40.191 16:18:14 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:40.191 16:18:14 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.191 16:18:14 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:40.191 16:18:14 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:40.191 16:18:14 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:40.191 16:18:14 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:40.191 16:18:14 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:40.191 16:18:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:40.191 16:18:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:40.191 16:18:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:40.191 16:18:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:40.191 16:18:14 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:40.192 16:18:14 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:40.192 16:18:14 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.192 16:18:14 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:06:40.192 16:18:14 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:40.192 16:18:14 -- bdev/nbd_common.sh@51 -- # local i 00:06:40.192 16:18:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.192 16:18:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:40.450 16:18:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:40.450 16:18:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:40.450 16:18:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:40.450 16:18:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:40.450 16:18:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:40.450 16:18:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:40.450 16:18:14 -- bdev/nbd_common.sh@41 -- # break 00:06:40.450 16:18:14 -- bdev/nbd_common.sh@45 -- # return 0 00:06:40.450 16:18:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.450 16:18:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:40.710 16:18:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:40.710 16:18:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:40.710 16:18:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:40.710 16:18:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:40.710 16:18:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:40.710 16:18:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:40.710 16:18:14 -- bdev/nbd_common.sh@41 -- # break 00:06:40.710 16:18:14 -- bdev/nbd_common.sh@45 -- # return 0 00:06:40.710 16:18:14 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:40.710 16:18:14 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.710 16:18:14 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:40.968 16:18:14 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:40.968 16:18:14 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:40.968 16:18:14 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:40.968 16:18:14 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:40.968 16:18:14 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:40.968 16:18:14 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:40.968 16:18:14 -- bdev/nbd_common.sh@65 -- # true 00:06:40.968 16:18:14 -- bdev/nbd_common.sh@65 -- # count=0 00:06:40.968 16:18:14 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:40.968 16:18:14 -- bdev/nbd_common.sh@104 -- # count=0 00:06:40.968 16:18:14 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:40.968 16:18:14 -- bdev/nbd_common.sh@109 -- # return 0 00:06:40.968 16:18:14 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:41.226 16:18:15 -- event/event.sh@35 -- # sleep 3 00:06:41.484 [2024-04-17 16:18:15.471561] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:41.743 [2024-04-17 16:18:15.584720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.743 [2024-04-17 16:18:15.584731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.743 [2024-04-17 16:18:15.640064] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
00:06:41.743 [2024-04-17 16:18:15.640123] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:44.277 spdk_app_start Round 2 00:06:44.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:44.277 16:18:18 -- event/event.sh@23 -- # for i in {0..2} 00:06:44.277 16:18:18 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:44.277 16:18:18 -- event/event.sh@25 -- # waitforlisten 61723 /var/tmp/spdk-nbd.sock 00:06:44.277 16:18:18 -- common/autotest_common.sh@817 -- # '[' -z 61723 ']' 00:06:44.277 16:18:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:44.277 16:18:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:44.277 16:18:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:44.277 16:18:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:44.277 16:18:18 -- common/autotest_common.sh@10 -- # set +x 00:06:44.536 16:18:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:44.536 16:18:18 -- common/autotest_common.sh@850 -- # return 0 00:06:44.536 16:18:18 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:44.794 Malloc0 00:06:44.795 16:18:18 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:45.054 Malloc1 00:06:45.054 16:18:18 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:45.054 16:18:18 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.054 16:18:18 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:45.054 16:18:18 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:45.054 16:18:18 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.054 16:18:18 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:45.054 16:18:18 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:45.054 16:18:18 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.054 16:18:18 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:45.054 16:18:18 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:45.054 16:18:18 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.054 16:18:18 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:45.054 16:18:18 -- bdev/nbd_common.sh@12 -- # local i 00:06:45.054 16:18:18 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:45.054 16:18:19 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:45.054 16:18:19 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:45.313 /dev/nbd0 00:06:45.313 16:18:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:45.313 16:18:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:45.313 16:18:19 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:06:45.313 16:18:19 -- common/autotest_common.sh@855 -- # local i 00:06:45.313 16:18:19 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:45.313 16:18:19 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:45.313 16:18:19 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:06:45.313 16:18:19 -- common/autotest_common.sh@859 
-- # break 00:06:45.313 16:18:19 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:45.313 16:18:19 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:45.313 16:18:19 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:45.313 1+0 records in 00:06:45.313 1+0 records out 00:06:45.313 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000320111 s, 12.8 MB/s 00:06:45.313 16:18:19 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:45.313 16:18:19 -- common/autotest_common.sh@872 -- # size=4096 00:06:45.313 16:18:19 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:45.313 16:18:19 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:45.313 16:18:19 -- common/autotest_common.sh@875 -- # return 0 00:06:45.313 16:18:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:45.313 16:18:19 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:45.313 16:18:19 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:45.572 /dev/nbd1 00:06:45.572 16:18:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:45.572 16:18:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:45.572 16:18:19 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:06:45.572 16:18:19 -- common/autotest_common.sh@855 -- # local i 00:06:45.573 16:18:19 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:45.573 16:18:19 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:45.573 16:18:19 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:06:45.573 16:18:19 -- common/autotest_common.sh@859 -- # break 00:06:45.573 16:18:19 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:45.573 16:18:19 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:45.573 16:18:19 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:45.573 1+0 records in 00:06:45.573 1+0 records out 00:06:45.573 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278648 s, 14.7 MB/s 00:06:45.573 16:18:19 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:45.573 16:18:19 -- common/autotest_common.sh@872 -- # size=4096 00:06:45.573 16:18:19 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:45.573 16:18:19 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:45.573 16:18:19 -- common/autotest_common.sh@875 -- # return 0 00:06:45.573 16:18:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:45.573 16:18:19 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:45.573 16:18:19 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:45.573 16:18:19 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.573 16:18:19 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:46.141 16:18:19 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:46.141 { 00:06:46.141 "bdev_name": "Malloc0", 00:06:46.141 "nbd_device": "/dev/nbd0" 00:06:46.141 }, 00:06:46.141 { 00:06:46.141 "bdev_name": "Malloc1", 00:06:46.141 "nbd_device": "/dev/nbd1" 00:06:46.141 } 00:06:46.141 ]' 00:06:46.141 16:18:19 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:46.141 { 00:06:46.141 "bdev_name": "Malloc0", 00:06:46.141 
"nbd_device": "/dev/nbd0" 00:06:46.141 }, 00:06:46.141 { 00:06:46.141 "bdev_name": "Malloc1", 00:06:46.141 "nbd_device": "/dev/nbd1" 00:06:46.141 } 00:06:46.141 ]' 00:06:46.141 16:18:19 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:46.141 16:18:19 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:46.141 /dev/nbd1' 00:06:46.141 16:18:19 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:46.141 /dev/nbd1' 00:06:46.141 16:18:19 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:46.141 16:18:19 -- bdev/nbd_common.sh@65 -- # count=2 00:06:46.141 16:18:19 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:46.141 16:18:19 -- bdev/nbd_common.sh@95 -- # count=2 00:06:46.141 16:18:19 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:46.141 16:18:19 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:46.141 16:18:19 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.141 16:18:19 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:46.141 16:18:19 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:46.141 16:18:19 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:46.141 16:18:19 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:46.141 16:18:19 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:46.141 256+0 records in 00:06:46.141 256+0 records out 00:06:46.141 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0096664 s, 108 MB/s 00:06:46.141 16:18:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:46.141 16:18:19 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:46.141 256+0 records in 00:06:46.141 256+0 records out 00:06:46.141 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0248344 s, 42.2 MB/s 00:06:46.141 16:18:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:46.141 16:18:19 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:46.141 256+0 records in 00:06:46.141 256+0 records out 00:06:46.141 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0263489 s, 39.8 MB/s 00:06:46.141 16:18:20 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:46.141 16:18:20 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.141 16:18:20 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:46.141 16:18:20 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:46.141 16:18:20 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:46.141 16:18:20 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:46.141 16:18:20 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:46.141 16:18:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:46.142 16:18:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:46.142 16:18:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:46.142 16:18:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:46.142 16:18:20 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:46.142 16:18:20 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:46.142 16:18:20 -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.142 16:18:20 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.142 16:18:20 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:46.142 16:18:20 -- bdev/nbd_common.sh@51 -- # local i 00:06:46.142 16:18:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:46.142 16:18:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:46.400 16:18:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:46.400 16:18:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:46.400 16:18:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:46.400 16:18:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:46.400 16:18:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:46.400 16:18:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:46.400 16:18:20 -- bdev/nbd_common.sh@41 -- # break 00:06:46.400 16:18:20 -- bdev/nbd_common.sh@45 -- # return 0 00:06:46.400 16:18:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:46.400 16:18:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:46.659 16:18:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:46.659 16:18:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:46.659 16:18:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:46.659 16:18:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:46.659 16:18:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:46.659 16:18:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:46.659 16:18:20 -- bdev/nbd_common.sh@41 -- # break 00:06:46.659 16:18:20 -- bdev/nbd_common.sh@45 -- # return 0 00:06:46.659 16:18:20 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:46.659 16:18:20 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.659 16:18:20 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:46.917 16:18:20 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:46.917 16:18:20 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:46.917 16:18:20 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:46.917 16:18:20 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:46.917 16:18:20 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:46.917 16:18:20 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:46.917 16:18:20 -- bdev/nbd_common.sh@65 -- # true 00:06:46.917 16:18:20 -- bdev/nbd_common.sh@65 -- # count=0 00:06:46.917 16:18:20 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:46.917 16:18:20 -- bdev/nbd_common.sh@104 -- # count=0 00:06:46.917 16:18:20 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:46.917 16:18:20 -- bdev/nbd_common.sh@109 -- # return 0 00:06:46.917 16:18:20 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:47.175 16:18:21 -- event/event.sh@35 -- # sleep 3 00:06:47.433 [2024-04-17 16:18:21.401851] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:47.691 [2024-04-17 16:18:21.524643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.691 [2024-04-17 16:18:21.524649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.691 [2024-04-17 16:18:21.582060] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 
'bdev_register' already registered. 00:06:47.691 [2024-04-17 16:18:21.582120] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:50.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:50.222 16:18:24 -- event/event.sh@38 -- # waitforlisten 61723 /var/tmp/spdk-nbd.sock 00:06:50.222 16:18:24 -- common/autotest_common.sh@817 -- # '[' -z 61723 ']' 00:06:50.222 16:18:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:50.222 16:18:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:50.222 16:18:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:50.222 16:18:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:50.222 16:18:24 -- common/autotest_common.sh@10 -- # set +x 00:06:50.787 16:18:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:50.787 16:18:24 -- common/autotest_common.sh@850 -- # return 0 00:06:50.787 16:18:24 -- event/event.sh@39 -- # killprocess 61723 00:06:50.787 16:18:24 -- common/autotest_common.sh@936 -- # '[' -z 61723 ']' 00:06:50.787 16:18:24 -- common/autotest_common.sh@940 -- # kill -0 61723 00:06:50.787 16:18:24 -- common/autotest_common.sh@941 -- # uname 00:06:50.787 16:18:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:50.787 16:18:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61723 00:06:50.787 killing process with pid 61723 00:06:50.787 16:18:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:50.787 16:18:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:50.787 16:18:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61723' 00:06:50.787 16:18:24 -- common/autotest_common.sh@955 -- # kill 61723 00:06:50.787 16:18:24 -- common/autotest_common.sh@960 -- # wait 61723 00:06:50.787 spdk_app_start is called in Round 0. 00:06:50.787 Shutdown signal received, stop current app iteration 00:06:50.787 Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 reinitialization... 00:06:50.787 spdk_app_start is called in Round 1. 00:06:50.787 Shutdown signal received, stop current app iteration 00:06:50.787 Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 reinitialization... 00:06:50.787 spdk_app_start is called in Round 2. 00:06:50.787 Shutdown signal received, stop current app iteration 00:06:50.787 Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 reinitialization... 00:06:50.787 spdk_app_start is called in Round 3. 
00:06:50.787 Shutdown signal received, stop current app iteration 00:06:51.045 16:18:24 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:51.046 16:18:24 -- event/event.sh@42 -- # return 0 00:06:51.046 00:06:51.046 real 0m19.778s 00:06:51.046 user 0m44.481s 00:06:51.046 sys 0m3.076s 00:06:51.046 16:18:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:51.046 16:18:24 -- common/autotest_common.sh@10 -- # set +x 00:06:51.046 ************************************ 00:06:51.046 END TEST app_repeat 00:06:51.046 ************************************ 00:06:51.046 16:18:24 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:51.046 16:18:24 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:51.046 16:18:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:51.046 16:18:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:51.046 16:18:24 -- common/autotest_common.sh@10 -- # set +x 00:06:51.046 ************************************ 00:06:51.046 START TEST cpu_locks 00:06:51.046 ************************************ 00:06:51.046 16:18:24 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:51.046 * Looking for test storage... 00:06:51.046 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:51.046 16:18:25 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:51.046 16:18:25 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:51.046 16:18:25 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:51.046 16:18:25 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:51.046 16:18:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:51.046 16:18:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:51.046 16:18:25 -- common/autotest_common.sh@10 -- # set +x 00:06:51.304 ************************************ 00:06:51.304 START TEST default_locks 00:06:51.304 ************************************ 00:06:51.304 16:18:25 -- common/autotest_common.sh@1111 -- # default_locks 00:06:51.304 16:18:25 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=62371 00:06:51.304 16:18:25 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:51.304 16:18:25 -- event/cpu_locks.sh@47 -- # waitforlisten 62371 00:06:51.304 16:18:25 -- common/autotest_common.sh@817 -- # '[' -z 62371 ']' 00:06:51.304 16:18:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.304 16:18:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:51.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.305 16:18:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.305 16:18:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:51.305 16:18:25 -- common/autotest_common.sh@10 -- # set +x 00:06:51.305 [2024-04-17 16:18:25.167538] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 
00:06:51.305 [2024-04-17 16:18:25.167846] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62371 ] 00:06:51.305 [2024-04-17 16:18:25.305356] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.562 [2024-04-17 16:18:25.439454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.495 16:18:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:52.495 16:18:26 -- common/autotest_common.sh@850 -- # return 0 00:06:52.495 16:18:26 -- event/cpu_locks.sh@49 -- # locks_exist 62371 00:06:52.495 16:18:26 -- event/cpu_locks.sh@22 -- # lslocks -p 62371 00:06:52.495 16:18:26 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:52.753 16:18:26 -- event/cpu_locks.sh@50 -- # killprocess 62371 00:06:52.753 16:18:26 -- common/autotest_common.sh@936 -- # '[' -z 62371 ']' 00:06:52.753 16:18:26 -- common/autotest_common.sh@940 -- # kill -0 62371 00:06:52.753 16:18:26 -- common/autotest_common.sh@941 -- # uname 00:06:52.753 16:18:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:52.753 16:18:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62371 00:06:52.753 16:18:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:52.753 killing process with pid 62371 00:06:52.753 16:18:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:52.753 16:18:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62371' 00:06:52.753 16:18:26 -- common/autotest_common.sh@955 -- # kill 62371 00:06:52.753 16:18:26 -- common/autotest_common.sh@960 -- # wait 62371 00:06:53.319 16:18:27 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 62371 00:06:53.319 16:18:27 -- common/autotest_common.sh@638 -- # local es=0 00:06:53.319 16:18:27 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 62371 00:06:53.319 16:18:27 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:06:53.319 16:18:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:53.319 16:18:27 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:06:53.319 16:18:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:53.319 16:18:27 -- common/autotest_common.sh@641 -- # waitforlisten 62371 00:06:53.319 16:18:27 -- common/autotest_common.sh@817 -- # '[' -z 62371 ']' 00:06:53.319 16:18:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.319 16:18:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:53.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.319 16:18:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:53.319 16:18:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:53.319 16:18:27 -- common/autotest_common.sh@10 -- # set +x 00:06:53.319 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (62371) - No such process 00:06:53.319 ERROR: process (pid: 62371) is no longer running 00:06:53.319 16:18:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:53.319 16:18:27 -- common/autotest_common.sh@850 -- # return 1 00:06:53.319 16:18:27 -- common/autotest_common.sh@641 -- # es=1 00:06:53.319 16:18:27 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:53.319 16:18:27 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:53.319 16:18:27 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:53.319 16:18:27 -- event/cpu_locks.sh@54 -- # no_locks 00:06:53.319 16:18:27 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:53.319 16:18:27 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:53.319 16:18:27 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:53.319 00:06:53.319 real 0m2.037s 00:06:53.319 user 0m2.242s 00:06:53.319 sys 0m0.596s 00:06:53.319 16:18:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:53.319 16:18:27 -- common/autotest_common.sh@10 -- # set +x 00:06:53.319 ************************************ 00:06:53.319 END TEST default_locks 00:06:53.319 ************************************ 00:06:53.319 16:18:27 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:53.319 16:18:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:53.319 16:18:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:53.319 16:18:27 -- common/autotest_common.sh@10 -- # set +x 00:06:53.319 ************************************ 00:06:53.319 START TEST default_locks_via_rpc 00:06:53.319 ************************************ 00:06:53.319 16:18:27 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:06:53.319 16:18:27 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=62440 00:06:53.319 16:18:27 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:53.319 16:18:27 -- event/cpu_locks.sh@63 -- # waitforlisten 62440 00:06:53.319 16:18:27 -- common/autotest_common.sh@817 -- # '[' -z 62440 ']' 00:06:53.319 16:18:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.319 16:18:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:53.319 16:18:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.319 16:18:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:53.319 16:18:27 -- common/autotest_common.sh@10 -- # set +x 00:06:53.319 [2024-04-17 16:18:27.328009] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 
00:06:53.319 [2024-04-17 16:18:27.328113] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62440 ] 00:06:53.577 [2024-04-17 16:18:27.468920] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.577 [2024-04-17 16:18:27.602806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.507 16:18:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:54.507 16:18:28 -- common/autotest_common.sh@850 -- # return 0 00:06:54.507 16:18:28 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:54.507 16:18:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:54.507 16:18:28 -- common/autotest_common.sh@10 -- # set +x 00:06:54.507 16:18:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:54.507 16:18:28 -- event/cpu_locks.sh@67 -- # no_locks 00:06:54.507 16:18:28 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:54.507 16:18:28 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:54.507 16:18:28 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:54.507 16:18:28 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:54.507 16:18:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:54.507 16:18:28 -- common/autotest_common.sh@10 -- # set +x 00:06:54.507 16:18:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:54.507 16:18:28 -- event/cpu_locks.sh@71 -- # locks_exist 62440 00:06:54.507 16:18:28 -- event/cpu_locks.sh@22 -- # lslocks -p 62440 00:06:54.507 16:18:28 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:54.766 16:18:28 -- event/cpu_locks.sh@73 -- # killprocess 62440 00:06:54.766 16:18:28 -- common/autotest_common.sh@936 -- # '[' -z 62440 ']' 00:06:54.766 16:18:28 -- common/autotest_common.sh@940 -- # kill -0 62440 00:06:54.766 16:18:28 -- common/autotest_common.sh@941 -- # uname 00:06:54.766 16:18:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:54.766 16:18:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62440 00:06:54.766 16:18:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:54.766 killing process with pid 62440 00:06:54.766 16:18:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:54.766 16:18:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62440' 00:06:54.766 16:18:28 -- common/autotest_common.sh@955 -- # kill 62440 00:06:54.766 16:18:28 -- common/autotest_common.sh@960 -- # wait 62440 00:06:55.023 00:06:55.023 real 0m1.798s 00:06:55.023 user 0m1.861s 00:06:55.023 sys 0m0.570s 00:06:55.023 ************************************ 00:06:55.023 END TEST default_locks_via_rpc 00:06:55.023 ************************************ 00:06:55.023 16:18:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:55.023 16:18:29 -- common/autotest_common.sh@10 -- # set +x 00:06:55.281 16:18:29 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:55.281 16:18:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:55.281 16:18:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:55.281 16:18:29 -- common/autotest_common.sh@10 -- # set +x 00:06:55.281 ************************************ 00:06:55.281 START TEST non_locking_app_on_locked_coremask 00:06:55.281 ************************************ 00:06:55.281 16:18:29 -- 
common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:06:55.281 16:18:29 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=62513 00:06:55.281 16:18:29 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:55.281 16:18:29 -- event/cpu_locks.sh@81 -- # waitforlisten 62513 /var/tmp/spdk.sock 00:06:55.281 16:18:29 -- common/autotest_common.sh@817 -- # '[' -z 62513 ']' 00:06:55.281 16:18:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.281 16:18:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:55.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.281 16:18:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.281 16:18:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:55.281 16:18:29 -- common/autotest_common.sh@10 -- # set +x 00:06:55.281 [2024-04-17 16:18:29.242292] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:06:55.281 [2024-04-17 16:18:29.242428] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62513 ] 00:06:55.538 [2024-04-17 16:18:29.388478] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.538 [2024-04-17 16:18:29.519893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.472 16:18:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:56.472 16:18:30 -- common/autotest_common.sh@850 -- # return 0 00:06:56.472 16:18:30 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=62541 00:06:56.472 16:18:30 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:56.472 16:18:30 -- event/cpu_locks.sh@85 -- # waitforlisten 62541 /var/tmp/spdk2.sock 00:06:56.472 16:18:30 -- common/autotest_common.sh@817 -- # '[' -z 62541 ']' 00:06:56.472 16:18:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:56.472 16:18:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:56.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:56.472 16:18:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:56.472 16:18:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:56.472 16:18:30 -- common/autotest_common.sh@10 -- # set +x 00:06:56.472 [2024-04-17 16:18:30.311849] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:06:56.472 [2024-04-17 16:18:30.311958] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62541 ] 00:06:56.472 [2024-04-17 16:18:30.455113] app.c: 818:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:56.472 [2024-04-17 16:18:30.455174] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.730 [2024-04-17 16:18:30.691635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.296 16:18:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:57.296 16:18:31 -- common/autotest_common.sh@850 -- # return 0 00:06:57.296 16:18:31 -- event/cpu_locks.sh@87 -- # locks_exist 62513 00:06:57.296 16:18:31 -- event/cpu_locks.sh@22 -- # lslocks -p 62513 00:06:57.296 16:18:31 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:58.229 16:18:32 -- event/cpu_locks.sh@89 -- # killprocess 62513 00:06:58.229 16:18:32 -- common/autotest_common.sh@936 -- # '[' -z 62513 ']' 00:06:58.229 16:18:32 -- common/autotest_common.sh@940 -- # kill -0 62513 00:06:58.229 16:18:32 -- common/autotest_common.sh@941 -- # uname 00:06:58.229 16:18:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:58.229 16:18:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62513 00:06:58.229 16:18:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:58.229 killing process with pid 62513 00:06:58.229 16:18:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:58.229 16:18:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62513' 00:06:58.229 16:18:32 -- common/autotest_common.sh@955 -- # kill 62513 00:06:58.229 16:18:32 -- common/autotest_common.sh@960 -- # wait 62513 00:06:59.164 16:18:32 -- event/cpu_locks.sh@90 -- # killprocess 62541 00:06:59.164 16:18:32 -- common/autotest_common.sh@936 -- # '[' -z 62541 ']' 00:06:59.164 16:18:32 -- common/autotest_common.sh@940 -- # kill -0 62541 00:06:59.164 16:18:32 -- common/autotest_common.sh@941 -- # uname 00:06:59.164 16:18:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:59.164 16:18:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62541 00:06:59.164 16:18:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:59.164 16:18:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:59.164 killing process with pid 62541 00:06:59.164 16:18:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62541' 00:06:59.164 16:18:32 -- common/autotest_common.sh@955 -- # kill 62541 00:06:59.164 16:18:32 -- common/autotest_common.sh@960 -- # wait 62541 00:06:59.422 00:06:59.422 real 0m4.227s 00:06:59.422 user 0m4.754s 00:06:59.422 sys 0m1.125s 00:06:59.422 16:18:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:59.422 16:18:33 -- common/autotest_common.sh@10 -- # set +x 00:06:59.422 ************************************ 00:06:59.422 END TEST non_locking_app_on_locked_coremask 00:06:59.422 ************************************ 00:06:59.422 16:18:33 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:59.422 16:18:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:59.422 16:18:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:59.422 16:18:33 -- common/autotest_common.sh@10 -- # set +x 00:06:59.680 ************************************ 00:06:59.680 START TEST locking_app_on_unlocked_coremask 00:06:59.680 ************************************ 00:06:59.680 16:18:33 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:06:59.680 16:18:33 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=62624 00:06:59.680 16:18:33 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 
0x1 --disable-cpumask-locks 00:06:59.680 16:18:33 -- event/cpu_locks.sh@99 -- # waitforlisten 62624 /var/tmp/spdk.sock 00:06:59.680 16:18:33 -- common/autotest_common.sh@817 -- # '[' -z 62624 ']' 00:06:59.680 16:18:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.680 16:18:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:59.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.680 16:18:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.680 16:18:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:59.680 16:18:33 -- common/autotest_common.sh@10 -- # set +x 00:06:59.680 [2024-04-17 16:18:33.588037] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:06:59.681 [2024-04-17 16:18:33.588163] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62624 ] 00:06:59.939 [2024-04-17 16:18:33.724458] app.c: 818:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:59.939 [2024-04-17 16:18:33.724515] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.939 [2024-04-17 16:18:33.848218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.873 16:18:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:00.873 16:18:34 -- common/autotest_common.sh@850 -- # return 0 00:07:00.873 16:18:34 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=62652 00:07:00.873 16:18:34 -- event/cpu_locks.sh@103 -- # waitforlisten 62652 /var/tmp/spdk2.sock 00:07:00.873 16:18:34 -- common/autotest_common.sh@817 -- # '[' -z 62652 ']' 00:07:00.873 16:18:34 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:00.873 16:18:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:00.873 16:18:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:00.873 16:18:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:00.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:00.873 16:18:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:00.873 16:18:34 -- common/autotest_common.sh@10 -- # set +x 00:07:00.873 [2024-04-17 16:18:34.657084] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 
00:07:00.873 [2024-04-17 16:18:34.657191] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62652 ] 00:07:00.873 [2024-04-17 16:18:34.804416] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.131 [2024-04-17 16:18:35.046064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.739 16:18:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:01.739 16:18:35 -- common/autotest_common.sh@850 -- # return 0 00:07:01.739 16:18:35 -- event/cpu_locks.sh@105 -- # locks_exist 62652 00:07:01.739 16:18:35 -- event/cpu_locks.sh@22 -- # lslocks -p 62652 00:07:01.739 16:18:35 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:02.672 16:18:36 -- event/cpu_locks.sh@107 -- # killprocess 62624 00:07:02.672 16:18:36 -- common/autotest_common.sh@936 -- # '[' -z 62624 ']' 00:07:02.672 16:18:36 -- common/autotest_common.sh@940 -- # kill -0 62624 00:07:02.672 16:18:36 -- common/autotest_common.sh@941 -- # uname 00:07:02.672 16:18:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:02.672 16:18:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62624 00:07:02.672 16:18:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:02.672 16:18:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:02.672 16:18:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62624' 00:07:02.672 killing process with pid 62624 00:07:02.673 16:18:36 -- common/autotest_common.sh@955 -- # kill 62624 00:07:02.673 16:18:36 -- common/autotest_common.sh@960 -- # wait 62624 00:07:03.239 16:18:37 -- event/cpu_locks.sh@108 -- # killprocess 62652 00:07:03.239 16:18:37 -- common/autotest_common.sh@936 -- # '[' -z 62652 ']' 00:07:03.239 16:18:37 -- common/autotest_common.sh@940 -- # kill -0 62652 00:07:03.239 16:18:37 -- common/autotest_common.sh@941 -- # uname 00:07:03.496 16:18:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:03.496 16:18:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62652 00:07:03.496 16:18:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:03.496 16:18:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:03.496 killing process with pid 62652 00:07:03.496 16:18:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62652' 00:07:03.496 16:18:37 -- common/autotest_common.sh@955 -- # kill 62652 00:07:03.496 16:18:37 -- common/autotest_common.sh@960 -- # wait 62652 00:07:03.752 00:07:03.752 real 0m4.217s 00:07:03.752 user 0m4.714s 00:07:03.752 sys 0m1.119s 00:07:03.752 16:18:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:03.752 16:18:37 -- common/autotest_common.sh@10 -- # set +x 00:07:03.752 ************************************ 00:07:03.752 END TEST locking_app_on_unlocked_coremask 00:07:03.752 ************************************ 00:07:03.752 16:18:37 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:03.752 16:18:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:03.752 16:18:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:03.752 16:18:37 -- common/autotest_common.sh@10 -- # set +x 00:07:04.009 ************************************ 00:07:04.009 START TEST locking_app_on_locked_coremask 00:07:04.009 
************************************ 00:07:04.009 16:18:37 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:07:04.009 16:18:37 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=62735 00:07:04.009 16:18:37 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:04.009 16:18:37 -- event/cpu_locks.sh@116 -- # waitforlisten 62735 /var/tmp/spdk.sock 00:07:04.009 16:18:37 -- common/autotest_common.sh@817 -- # '[' -z 62735 ']' 00:07:04.009 16:18:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.009 16:18:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:04.009 16:18:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.009 16:18:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:04.009 16:18:37 -- common/autotest_common.sh@10 -- # set +x 00:07:04.009 [2024-04-17 16:18:37.914281] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:07:04.009 [2024-04-17 16:18:37.914391] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62735 ] 00:07:04.009 [2024-04-17 16:18:38.052265] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.267 [2024-04-17 16:18:38.174054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.201 16:18:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:05.201 16:18:38 -- common/autotest_common.sh@850 -- # return 0 00:07:05.201 16:18:38 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=62763 00:07:05.201 16:18:38 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:05.201 16:18:38 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 62763 /var/tmp/spdk2.sock 00:07:05.201 16:18:38 -- common/autotest_common.sh@638 -- # local es=0 00:07:05.201 16:18:38 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 62763 /var/tmp/spdk2.sock 00:07:05.201 16:18:38 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:07:05.201 16:18:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:05.201 16:18:38 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:07:05.201 16:18:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:05.201 16:18:38 -- common/autotest_common.sh@641 -- # waitforlisten 62763 /var/tmp/spdk2.sock 00:07:05.201 16:18:38 -- common/autotest_common.sh@817 -- # '[' -z 62763 ']' 00:07:05.201 16:18:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:05.201 16:18:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:05.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:05.201 16:18:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:05.201 16:18:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:05.202 16:18:38 -- common/autotest_common.sh@10 -- # set +x 00:07:05.202 [2024-04-17 16:18:38.990448] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 
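[Note: this case flips the scenario. The first target (pid 62735) was started without --disable-cpumask-locks and therefore holds the core-0 lock, and the second launch is wrapped in NOT, meaning the test passes only if the second target fails to come up. A hedged equivalent of what the harness asserts, with the expected error quoted from the output that follows:]

    ./build/bin/spdk_tgt -m 0x1 &                          # holds the core-0 lock
    ! ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock   # must exit non-zero
    # expected: "Cannot create lock on core 0, probably process <pid> has claimed it."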
00:07:05.202 [2024-04-17 16:18:38.990563] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62763 ] 00:07:05.202 [2024-04-17 16:18:39.138175] app.c: 688:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 62735 has claimed it. 00:07:05.202 [2024-04-17 16:18:39.138256] app.c: 814:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:05.767 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (62763) - No such process 00:07:05.767 ERROR: process (pid: 62763) is no longer running 00:07:05.767 16:18:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:05.767 16:18:39 -- common/autotest_common.sh@850 -- # return 1 00:07:05.767 16:18:39 -- common/autotest_common.sh@641 -- # es=1 00:07:05.767 16:18:39 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:05.767 16:18:39 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:05.767 16:18:39 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:05.767 16:18:39 -- event/cpu_locks.sh@122 -- # locks_exist 62735 00:07:05.767 16:18:39 -- event/cpu_locks.sh@22 -- # lslocks -p 62735 00:07:05.767 16:18:39 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:06.332 16:18:40 -- event/cpu_locks.sh@124 -- # killprocess 62735 00:07:06.332 16:18:40 -- common/autotest_common.sh@936 -- # '[' -z 62735 ']' 00:07:06.332 16:18:40 -- common/autotest_common.sh@940 -- # kill -0 62735 00:07:06.332 16:18:40 -- common/autotest_common.sh@941 -- # uname 00:07:06.332 16:18:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:06.332 16:18:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62735 00:07:06.332 16:18:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:06.332 16:18:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:06.332 killing process with pid 62735 00:07:06.332 16:18:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62735' 00:07:06.332 16:18:40 -- common/autotest_common.sh@955 -- # kill 62735 00:07:06.332 16:18:40 -- common/autotest_common.sh@960 -- # wait 62735 00:07:06.589 00:07:06.589 real 0m2.671s 00:07:06.589 user 0m3.125s 00:07:06.589 sys 0m0.620s 00:07:06.589 16:18:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:06.589 ************************************ 00:07:06.589 16:18:40 -- common/autotest_common.sh@10 -- # set +x 00:07:06.589 END TEST locking_app_on_locked_coremask 00:07:06.589 ************************************ 00:07:06.589 16:18:40 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:06.589 16:18:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:06.589 16:18:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:06.589 16:18:40 -- common/autotest_common.sh@10 -- # set +x 00:07:06.846 ************************************ 00:07:06.846 START TEST locking_overlapped_coremask 00:07:06.846 ************************************ 00:07:06.846 16:18:40 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:07:06.846 16:18:40 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=62824 00:07:06.846 16:18:40 -- event/cpu_locks.sh@133 -- # waitforlisten 62824 /var/tmp/spdk.sock 00:07:06.846 16:18:40 -- common/autotest_common.sh@817 -- # '[' -z 62824 ']' 00:07:06.846 16:18:40 -- 
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.846 16:18:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:06.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.846 16:18:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.846 16:18:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:06.846 16:18:40 -- common/autotest_common.sh@10 -- # set +x 00:07:06.846 16:18:40 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:06.846 [2024-04-17 16:18:40.711872] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:07:06.846 [2024-04-17 16:18:40.711999] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62824 ] 00:07:06.846 [2024-04-17 16:18:40.853680] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:07.104 [2024-04-17 16:18:40.973358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.104 [2024-04-17 16:18:40.973497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:07.104 [2024-04-17 16:18:40.973499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.696 16:18:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:07.696 16:18:41 -- common/autotest_common.sh@850 -- # return 0 00:07:07.696 16:18:41 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=62854 00:07:07.696 16:18:41 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:07.696 16:18:41 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 62854 /var/tmp/spdk2.sock 00:07:07.696 16:18:41 -- common/autotest_common.sh@638 -- # local es=0 00:07:07.696 16:18:41 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 62854 /var/tmp/spdk2.sock 00:07:07.696 16:18:41 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:07:07.696 16:18:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:07.696 16:18:41 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:07:07.696 16:18:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:07.696 16:18:41 -- common/autotest_common.sh@641 -- # waitforlisten 62854 /var/tmp/spdk2.sock 00:07:07.696 16:18:41 -- common/autotest_common.sh@817 -- # '[' -z 62854 ']' 00:07:07.696 16:18:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:07.696 16:18:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:07.696 16:18:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:07.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:07.696 16:18:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:07.696 16:18:41 -- common/autotest_common.sh@10 -- # set +x 00:07:07.953 [2024-04-17 16:18:41.763510] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 
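[Note: the two masks here are chosen to collide on exactly one core: 0x7 = 0b00111 covers cores 0-2 and 0x1c = 0b11100 covers cores 2-4, so the only contested core is core 2, which is the core named in the claim error that follows. The overlap can be checked directly:]

    printf '0x%x\n' $((0x7 & 0x1c))   # -> 0x4, i.e. only bit 2 (core 2) is shared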
00:07:07.953 [2024-04-17 16:18:41.763617] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62854 ] 00:07:07.953 [2024-04-17 16:18:41.907644] app.c: 688:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 62824 has claimed it. 00:07:07.953 [2024-04-17 16:18:41.907741] app.c: 814:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:08.518 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (62854) - No such process 00:07:08.518 ERROR: process (pid: 62854) is no longer running 00:07:08.518 16:18:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:08.518 16:18:42 -- common/autotest_common.sh@850 -- # return 1 00:07:08.518 16:18:42 -- common/autotest_common.sh@641 -- # es=1 00:07:08.518 16:18:42 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:08.518 16:18:42 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:08.518 16:18:42 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:08.518 16:18:42 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:08.518 16:18:42 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:08.518 16:18:42 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:08.519 16:18:42 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:08.519 16:18:42 -- event/cpu_locks.sh@141 -- # killprocess 62824 00:07:08.519 16:18:42 -- common/autotest_common.sh@936 -- # '[' -z 62824 ']' 00:07:08.519 16:18:42 -- common/autotest_common.sh@940 -- # kill -0 62824 00:07:08.519 16:18:42 -- common/autotest_common.sh@941 -- # uname 00:07:08.519 16:18:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:08.519 16:18:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62824 00:07:08.519 16:18:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:08.519 16:18:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:08.519 killing process with pid 62824 00:07:08.519 16:18:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62824' 00:07:08.519 16:18:42 -- common/autotest_common.sh@955 -- # kill 62824 00:07:08.519 16:18:42 -- common/autotest_common.sh@960 -- # wait 62824 00:07:09.084 00:07:09.084 real 0m2.302s 00:07:09.084 user 0m6.364s 00:07:09.084 sys 0m0.413s 00:07:09.084 16:18:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:09.084 16:18:42 -- common/autotest_common.sh@10 -- # set +x 00:07:09.084 ************************************ 00:07:09.084 END TEST locking_overlapped_coremask 00:07:09.084 ************************************ 00:07:09.084 16:18:42 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:09.084 16:18:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:09.084 16:18:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:09.085 16:18:42 -- common/autotest_common.sh@10 -- # set +x 00:07:09.085 ************************************ 00:07:09.085 START TEST locking_overlapped_coremask_via_rpc 00:07:09.085 ************************************ 
00:07:09.085 16:18:43 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:07:09.085 16:18:43 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=62904 00:07:09.085 16:18:43 -- event/cpu_locks.sh@149 -- # waitforlisten 62904 /var/tmp/spdk.sock 00:07:09.085 16:18:43 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:09.085 16:18:43 -- common/autotest_common.sh@817 -- # '[' -z 62904 ']' 00:07:09.085 16:18:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.085 16:18:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:09.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.085 16:18:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.085 16:18:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:09.085 16:18:43 -- common/autotest_common.sh@10 -- # set +x 00:07:09.342 [2024-04-17 16:18:43.129828] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:07:09.342 [2024-04-17 16:18:43.129919] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62904 ] 00:07:09.342 [2024-04-17 16:18:43.260980] app.c: 818:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:09.342 [2024-04-17 16:18:43.261037] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:09.342 [2024-04-17 16:18:43.381988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.342 [2024-04-17 16:18:43.382124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:09.342 [2024-04-17 16:18:43.382128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.275 16:18:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:10.275 16:18:44 -- common/autotest_common.sh@850 -- # return 0 00:07:10.275 16:18:44 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=62934 00:07:10.275 16:18:44 -- event/cpu_locks.sh@153 -- # waitforlisten 62934 /var/tmp/spdk2.sock 00:07:10.275 16:18:44 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:10.275 16:18:44 -- common/autotest_common.sh@817 -- # '[' -z 62934 ']' 00:07:10.275 16:18:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:10.275 16:18:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:10.275 16:18:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:10.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:10.275 16:18:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:10.275 16:18:44 -- common/autotest_common.sh@10 -- # set +x 00:07:10.275 [2024-04-17 16:18:44.156737] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 
00:07:10.275 [2024-04-17 16:18:44.156858] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62934 ] 00:07:10.275 [2024-04-17 16:18:44.301438] app.c: 818:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:10.275 [2024-04-17 16:18:44.301504] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:10.533 [2024-04-17 16:18:44.565558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:10.533 [2024-04-17 16:18:44.568937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:07:10.533 [2024-04-17 16:18:44.568941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:11.467 16:18:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:11.467 16:18:45 -- common/autotest_common.sh@850 -- # return 0 00:07:11.467 16:18:45 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:11.467 16:18:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.467 16:18:45 -- common/autotest_common.sh@10 -- # set +x 00:07:11.467 16:18:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.467 16:18:45 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:11.467 16:18:45 -- common/autotest_common.sh@638 -- # local es=0 00:07:11.467 16:18:45 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:11.467 16:18:45 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:07:11.467 16:18:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:11.467 16:18:45 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:07:11.467 16:18:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:11.467 16:18:45 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:11.467 16:18:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.467 16:18:45 -- common/autotest_common.sh@10 -- # set +x 00:07:11.467 [2024-04-17 16:18:45.184957] app.c: 688:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 62904 has claimed it. 
00:07:11.467 2024/04/17 16:18:45 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:07:11.467 request: 00:07:11.467 { 00:07:11.467 "method": "framework_enable_cpumask_locks", 00:07:11.467 "params": {} 00:07:11.467 } 00:07:11.467 Got JSON-RPC error response 00:07:11.467 GoRPCClient: error on JSON-RPC call 00:07:11.467 16:18:45 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:07:11.467 16:18:45 -- common/autotest_common.sh@641 -- # es=1 00:07:11.467 16:18:45 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:11.467 16:18:45 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:11.467 16:18:45 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:11.467 16:18:45 -- event/cpu_locks.sh@158 -- # waitforlisten 62904 /var/tmp/spdk.sock 00:07:11.467 16:18:45 -- common/autotest_common.sh@817 -- # '[' -z 62904 ']' 00:07:11.467 16:18:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.467 16:18:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:11.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.467 16:18:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.467 16:18:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:11.467 16:18:45 -- common/autotest_common.sh@10 -- # set +x 00:07:11.467 16:18:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:11.467 16:18:45 -- common/autotest_common.sh@850 -- # return 0 00:07:11.467 16:18:45 -- event/cpu_locks.sh@159 -- # waitforlisten 62934 /var/tmp/spdk2.sock 00:07:11.467 16:18:45 -- common/autotest_common.sh@817 -- # '[' -z 62934 ']' 00:07:11.467 16:18:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:11.467 16:18:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:11.467 16:18:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:11.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
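[Note: this is the runtime counterpart of the CLI flag. Both targets were started with --disable-cpumask-locks; the first then claimed its cores over JSON-RPC (the framework_enable_cpumask_locks call above returned 0), and the same call against the second target fails with code -32603 because core 2 is already locked. A hedged sketch of that flow, assuming the stock scripts/rpc.py wrapper exposes the method under the same name as in the error output:]

    scripts/rpc.py -s /var/tmp/spdk.sock  framework_enable_cpumask_locks   # first claim: ok
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # "Failed to claim CPU core: 2"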
00:07:11.467 16:18:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:11.467 16:18:45 -- common/autotest_common.sh@10 -- # set +x 00:07:11.726 16:18:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:11.726 16:18:45 -- common/autotest_common.sh@850 -- # return 0 00:07:11.726 16:18:45 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:11.726 16:18:45 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:11.726 16:18:45 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:11.726 16:18:45 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:11.726 00:07:11.726 real 0m2.684s 00:07:11.726 user 0m1.368s 00:07:11.726 sys 0m0.246s 00:07:11.726 16:18:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:11.726 ************************************ 00:07:11.726 END TEST locking_overlapped_coremask_via_rpc 00:07:11.726 16:18:45 -- common/autotest_common.sh@10 -- # set +x 00:07:11.726 ************************************ 00:07:11.984 16:18:45 -- event/cpu_locks.sh@174 -- # cleanup 00:07:11.984 16:18:45 -- event/cpu_locks.sh@15 -- # [[ -z 62904 ]] 00:07:11.984 16:18:45 -- event/cpu_locks.sh@15 -- # killprocess 62904 00:07:11.984 16:18:45 -- common/autotest_common.sh@936 -- # '[' -z 62904 ']' 00:07:11.984 16:18:45 -- common/autotest_common.sh@940 -- # kill -0 62904 00:07:11.984 16:18:45 -- common/autotest_common.sh@941 -- # uname 00:07:11.984 16:18:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:11.984 16:18:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62904 00:07:11.984 16:18:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:11.984 killing process with pid 62904 00:07:11.984 16:18:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:11.984 16:18:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62904' 00:07:11.984 16:18:45 -- common/autotest_common.sh@955 -- # kill 62904 00:07:11.984 16:18:45 -- common/autotest_common.sh@960 -- # wait 62904 00:07:12.549 16:18:46 -- event/cpu_locks.sh@16 -- # [[ -z 62934 ]] 00:07:12.549 16:18:46 -- event/cpu_locks.sh@16 -- # killprocess 62934 00:07:12.549 16:18:46 -- common/autotest_common.sh@936 -- # '[' -z 62934 ']' 00:07:12.549 16:18:46 -- common/autotest_common.sh@940 -- # kill -0 62934 00:07:12.549 16:18:46 -- common/autotest_common.sh@941 -- # uname 00:07:12.549 16:18:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:12.549 16:18:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62934 00:07:12.549 16:18:46 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:07:12.549 16:18:46 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:07:12.549 killing process with pid 62934 00:07:12.549 16:18:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62934' 00:07:12.549 16:18:46 -- common/autotest_common.sh@955 -- # kill 62934 00:07:12.549 16:18:46 -- common/autotest_common.sh@960 -- # wait 62934 00:07:12.807 16:18:46 -- event/cpu_locks.sh@18 -- # rm -f 00:07:12.807 16:18:46 -- event/cpu_locks.sh@1 -- # cleanup 00:07:12.807 16:18:46 -- event/cpu_locks.sh@15 -- # [[ -z 62904 ]] 00:07:12.807 16:18:46 -- event/cpu_locks.sh@15 -- # killprocess 62904 00:07:12.807 16:18:46 -- 
common/autotest_common.sh@936 -- # '[' -z 62904 ']' 00:07:12.807 16:18:46 -- common/autotest_common.sh@940 -- # kill -0 62904 00:07:12.807 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (62904) - No such process 00:07:12.807 16:18:46 -- common/autotest_common.sh@963 -- # echo 'Process with pid 62904 is not found' 00:07:12.807 Process with pid 62904 is not found 00:07:12.807 16:18:46 -- event/cpu_locks.sh@16 -- # [[ -z 62934 ]] 00:07:12.807 16:18:46 -- event/cpu_locks.sh@16 -- # killprocess 62934 00:07:12.807 16:18:46 -- common/autotest_common.sh@936 -- # '[' -z 62934 ']' 00:07:12.807 16:18:46 -- common/autotest_common.sh@940 -- # kill -0 62934 00:07:12.807 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (62934) - No such process 00:07:12.807 16:18:46 -- common/autotest_common.sh@963 -- # echo 'Process with pid 62934 is not found' 00:07:12.807 Process with pid 62934 is not found 00:07:12.807 16:18:46 -- event/cpu_locks.sh@18 -- # rm -f 00:07:12.807 00:07:12.807 real 0m21.832s 00:07:12.807 user 0m37.572s 00:07:12.807 sys 0m5.728s 00:07:12.807 16:18:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:12.807 16:18:46 -- common/autotest_common.sh@10 -- # set +x 00:07:12.807 ************************************ 00:07:12.807 END TEST cpu_locks 00:07:12.807 ************************************ 00:07:12.807 00:07:12.807 real 0m51.200s 00:07:12.807 user 1m37.569s 00:07:12.807 sys 0m9.810s 00:07:12.807 16:18:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:12.807 16:18:46 -- common/autotest_common.sh@10 -- # set +x 00:07:12.807 ************************************ 00:07:12.807 END TEST event 00:07:12.807 ************************************ 00:07:12.807 16:18:46 -- spdk/autotest.sh@177 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:12.807 16:18:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:12.807 16:18:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:12.807 16:18:46 -- common/autotest_common.sh@10 -- # set +x 00:07:13.065 ************************************ 00:07:13.065 START TEST thread 00:07:13.065 ************************************ 00:07:13.065 16:18:46 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:13.065 * Looking for test storage... 00:07:13.065 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:13.065 16:18:46 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:13.065 16:18:46 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:13.065 16:18:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:13.065 16:18:46 -- common/autotest_common.sh@10 -- # set +x 00:07:13.065 ************************************ 00:07:13.065 START TEST thread_poller_perf 00:07:13.065 ************************************ 00:07:13.065 16:18:47 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:13.065 [2024-04-17 16:18:47.043668] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 
00:07:13.065 [2024-04-17 16:18:47.043801] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63095 ] 00:07:13.323 [2024-04-17 16:18:47.181482] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.323 [2024-04-17 16:18:47.323603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.323 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:14.698 ====================================== 00:07:14.698 busy:2209259319 (cyc) 00:07:14.698 total_run_count: 284000 00:07:14.698 tsc_hz: 2200000000 (cyc) 00:07:14.698 ====================================== 00:07:14.698 poller_cost: 7779 (cyc), 3535 (nsec) 00:07:14.698 00:07:14.698 real 0m1.435s 00:07:14.698 user 0m1.260s 00:07:14.698 sys 0m0.063s 00:07:14.698 16:18:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:14.698 16:18:48 -- common/autotest_common.sh@10 -- # set +x 00:07:14.698 ************************************ 00:07:14.698 END TEST thread_poller_perf 00:07:14.698 ************************************ 00:07:14.698 16:18:48 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:14.698 16:18:48 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:14.698 16:18:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:14.698 16:18:48 -- common/autotest_common.sh@10 -- # set +x 00:07:14.698 ************************************ 00:07:14.698 START TEST thread_poller_perf 00:07:14.698 ************************************ 00:07:14.698 16:18:48 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:14.698 [2024-04-17 16:18:48.568557] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:07:14.698 [2024-04-17 16:18:48.568659] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63136 ] 00:07:14.698 [2024-04-17 16:18:48.700525] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.955 [2024-04-17 16:18:48.824072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.955 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:07:16.330 ====================================== 00:07:16.330 busy:2202655589 (cyc) 00:07:16.330 total_run_count: 3931000 00:07:16.330 tsc_hz: 2200000000 (cyc) 00:07:16.330 ====================================== 00:07:16.330 poller_cost: 560 (cyc), 254 (nsec) 00:07:16.330 00:07:16.330 real 0m1.400s 00:07:16.330 user 0m1.232s 00:07:16.330 sys 0m0.057s 00:07:16.330 16:18:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:16.330 16:18:49 -- common/autotest_common.sh@10 -- # set +x 00:07:16.330 ************************************ 00:07:16.330 END TEST thread_poller_perf 00:07:16.330 ************************************ 00:07:16.330 16:18:49 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:16.330 00:07:16.330 real 0m3.082s 00:07:16.330 user 0m2.580s 00:07:16.330 sys 0m0.261s 00:07:16.330 16:18:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:16.330 16:18:49 -- common/autotest_common.sh@10 -- # set +x 00:07:16.330 ************************************ 00:07:16.330 END TEST thread 00:07:16.330 ************************************ 00:07:16.330 16:18:50 -- spdk/autotest.sh@178 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:16.330 16:18:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:16.330 16:18:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:16.330 16:18:50 -- common/autotest_common.sh@10 -- # set +x 00:07:16.330 ************************************ 00:07:16.330 START TEST accel 00:07:16.330 ************************************ 00:07:16.330 16:18:50 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:16.330 * Looking for test storage... 00:07:16.330 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:16.330 16:18:50 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:16.330 16:18:50 -- accel/accel.sh@82 -- # get_expected_opcs 00:07:16.330 16:18:50 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:16.330 16:18:50 -- accel/accel.sh@62 -- # spdk_tgt_pid=63216 00:07:16.330 16:18:50 -- accel/accel.sh@63 -- # waitforlisten 63216 00:07:16.330 16:18:50 -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:16.330 16:18:50 -- accel/accel.sh@61 -- # build_accel_config 00:07:16.330 16:18:50 -- common/autotest_common.sh@817 -- # '[' -z 63216 ']' 00:07:16.330 16:18:50 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:16.330 16:18:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.330 16:18:50 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:16.330 16:18:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:16.330 16:18:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.330 16:18:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.330 16:18:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.330 16:18:50 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:16.330 16:18:50 -- accel/accel.sh@40 -- # local IFS=, 00:07:16.330 16:18:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:16.330 16:18:50 -- accel/accel.sh@41 -- # jq -r . 00:07:16.330 16:18:50 -- common/autotest_common.sh@10 -- # set +x 00:07:16.330 [2024-04-17 16:18:50.188614] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 
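[Note: the two poller_cost figures above follow directly from the printed counters: cost_cyc = busy / total_run_count, and cost_nsec = cost_cyc / (tsc_hz / 10^9). With tsc_hz = 2200000000, i.e. 2.2 cycles per nanosecond: 2209259319 / 284000 ≈ 7779 cyc ≈ 3535 ns for the 1 µs-period run, and 2202655589 / 3931000 ≈ 560 cyc ≈ 254 ns for the 0 µs-period run; the tighter loop spreads the fixed per-iteration overhead across roughly 14x more iterations. Quick check in shell integer arithmetic:]

    echo $((2209259319 / 284000)) $((2202655589 / 3931000))   # -> 7779 560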
00:07:16.330 [2024-04-17 16:18:50.188707] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63216 ] 00:07:16.330 [2024-04-17 16:18:50.318824] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.588 [2024-04-17 16:18:50.489769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.154 16:18:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:17.154 16:18:51 -- common/autotest_common.sh@850 -- # return 0 00:07:17.154 16:18:51 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:17.154 16:18:51 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:17.154 16:18:51 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:17.154 16:18:51 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:17.154 16:18:51 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:17.154 16:18:51 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:17.154 16:18:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.154 16:18:51 -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:07:17.154 16:18:51 -- common/autotest_common.sh@10 -- # set +x 00:07:17.154 16:18:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.154 16:18:51 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:17.154 16:18:51 -- accel/accel.sh@72 -- # IFS== 00:07:17.154 16:18:51 -- accel/accel.sh@72 -- # read -r opc module 00:07:17.154 16:18:51 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:17.154 16:18:51 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:17.154 16:18:51 -- accel/accel.sh@72 -- # IFS== 00:07:17.154 16:18:51 -- accel/accel.sh@72 -- # read -r opc module 00:07:17.154 16:18:51 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:17.154 16:18:51 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:17.154 16:18:51 -- accel/accel.sh@72 -- # IFS== 00:07:17.154 16:18:51 -- accel/accel.sh@72 -- # read -r opc module 00:07:17.154 16:18:51 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:17.154 16:18:51 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:17.154 16:18:51 -- accel/accel.sh@72 -- # IFS== 00:07:17.154 16:18:51 -- accel/accel.sh@72 -- # read -r opc module 00:07:17.154 16:18:51 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:17.154 16:18:51 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:17.154 16:18:51 -- accel/accel.sh@72 -- # IFS== 00:07:17.154 16:18:51 -- accel/accel.sh@72 -- # read -r opc module 00:07:17.154 16:18:51 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:17.154 16:18:51 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:17.154 16:18:51 -- accel/accel.sh@72 -- # IFS== 00:07:17.154 16:18:51 -- accel/accel.sh@72 -- # read -r opc module 00:07:17.154 16:18:51 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:17.154 16:18:51 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:17.154 16:18:51 -- accel/accel.sh@72 -- # IFS== 00:07:17.154 16:18:51 -- accel/accel.sh@72 -- # read -r opc module 00:07:17.154 16:18:51 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:17.154 16:18:51 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:17.154 16:18:51 -- accel/accel.sh@72 -- # IFS== 00:07:17.154 
16:18:51 -- accel/accel.sh@72 -- # read -r opc module 00:07:17.154 16:18:51 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:17.154 16:18:51 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:17.154 16:18:51 -- accel/accel.sh@72 -- # IFS== 00:07:17.154 16:18:51 -- accel/accel.sh@72 -- # read -r opc module 00:07:17.154 16:18:51 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:17.154 16:18:51 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:17.154 16:18:51 -- accel/accel.sh@72 -- # IFS== 00:07:17.154 16:18:51 -- accel/accel.sh@72 -- # read -r opc module 00:07:17.154 16:18:51 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:17.154 16:18:51 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:17.154 16:18:51 -- accel/accel.sh@72 -- # IFS== 00:07:17.154 16:18:51 -- accel/accel.sh@72 -- # read -r opc module 00:07:17.154 16:18:51 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:17.154 16:18:51 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:17.154 16:18:51 -- accel/accel.sh@72 -- # IFS== 00:07:17.414 16:18:51 -- accel/accel.sh@72 -- # read -r opc module 00:07:17.414 16:18:51 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:17.414 16:18:51 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:17.414 16:18:51 -- accel/accel.sh@72 -- # IFS== 00:07:17.414 16:18:51 -- accel/accel.sh@72 -- # read -r opc module 00:07:17.414 16:18:51 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:17.414 16:18:51 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:17.414 16:18:51 -- accel/accel.sh@72 -- # IFS== 00:07:17.414 16:18:51 -- accel/accel.sh@72 -- # read -r opc module 00:07:17.414 16:18:51 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:17.414 16:18:51 -- accel/accel.sh@75 -- # killprocess 63216 00:07:17.414 16:18:51 -- common/autotest_common.sh@936 -- # '[' -z 63216 ']' 00:07:17.414 16:18:51 -- common/autotest_common.sh@940 -- # kill -0 63216 00:07:17.414 16:18:51 -- common/autotest_common.sh@941 -- # uname 00:07:17.414 16:18:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:17.414 16:18:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63216 00:07:17.414 16:18:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:17.414 killing process with pid 63216 00:07:17.414 16:18:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:17.414 16:18:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63216' 00:07:17.414 16:18:51 -- common/autotest_common.sh@955 -- # kill 63216 00:07:17.414 16:18:51 -- common/autotest_common.sh@960 -- # wait 63216 00:07:17.981 16:18:51 -- accel/accel.sh@76 -- # trap - ERR 00:07:17.981 16:18:51 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:17.981 16:18:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:17.981 16:18:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:17.981 16:18:51 -- common/autotest_common.sh@10 -- # set +x 00:07:17.981 16:18:51 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:07:17.981 16:18:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:17.981 16:18:51 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.981 16:18:51 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:17.981 16:18:51 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:17.981 16:18:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.981 16:18:51 
-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.981 16:18:51 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:17.981 16:18:51 -- accel/accel.sh@40 -- # local IFS=, 00:07:17.981 16:18:51 -- accel/accel.sh@41 -- # jq -r . 00:07:17.981 16:18:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:17.981 16:18:52 -- common/autotest_common.sh@10 -- # set +x 00:07:18.240 16:18:52 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:18.240 16:18:52 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:18.240 16:18:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:18.240 16:18:52 -- common/autotest_common.sh@10 -- # set +x 00:07:18.240 ************************************ 00:07:18.240 START TEST accel_missing_filename 00:07:18.240 ************************************ 00:07:18.240 16:18:52 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:07:18.240 16:18:52 -- common/autotest_common.sh@638 -- # local es=0 00:07:18.240 16:18:52 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:18.240 16:18:52 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:07:18.240 16:18:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:18.240 16:18:52 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:07:18.240 16:18:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:18.240 16:18:52 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:07:18.240 16:18:52 -- accel/accel.sh@12 -- # build_accel_config 00:07:18.240 16:18:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:18.240 16:18:52 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:18.240 16:18:52 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:18.240 16:18:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.240 16:18:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.240 16:18:52 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:18.240 16:18:52 -- accel/accel.sh@40 -- # local IFS=, 00:07:18.240 16:18:52 -- accel/accel.sh@41 -- # jq -r . 00:07:18.240 [2024-04-17 16:18:52.146155] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:07:18.240 [2024-04-17 16:18:52.146257] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63300 ] 00:07:18.240 [2024-04-17 16:18:52.278210] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.499 [2024-04-17 16:18:52.424863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.499 [2024-04-17 16:18:52.481304] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:18.758 [2024-04-17 16:18:52.557127] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:07:18.758 A filename is required. 
00:07:18.758 ************************************ 00:07:18.758 END TEST accel_missing_filename 00:07:18.758 ************************************ 00:07:18.758 16:18:52 -- common/autotest_common.sh@641 -- # es=234 00:07:18.758 16:18:52 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:18.758 16:18:52 -- common/autotest_common.sh@650 -- # es=106 00:07:18.758 16:18:52 -- common/autotest_common.sh@651 -- # case "$es" in 00:07:18.758 16:18:52 -- common/autotest_common.sh@658 -- # es=1 00:07:18.758 16:18:52 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:18.758 00:07:18.758 real 0m0.561s 00:07:18.758 user 0m0.387s 00:07:18.758 sys 0m0.114s 00:07:18.758 16:18:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:18.758 16:18:52 -- common/autotest_common.sh@10 -- # set +x 00:07:18.758 16:18:52 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:18.758 16:18:52 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:07:18.758 16:18:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:18.758 16:18:52 -- common/autotest_common.sh@10 -- # set +x 00:07:18.758 ************************************ 00:07:18.758 START TEST accel_compress_verify 00:07:18.758 ************************************ 00:07:18.758 16:18:52 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:18.758 16:18:52 -- common/autotest_common.sh@638 -- # local es=0 00:07:18.758 16:18:52 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:18.758 16:18:52 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:07:18.758 16:18:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:18.758 16:18:52 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:07:18.758 16:18:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:18.758 16:18:52 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:18.758 16:18:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:18.758 16:18:52 -- accel/accel.sh@12 -- # build_accel_config 00:07:18.758 16:18:52 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:18.758 16:18:52 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:18.758 16:18:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.758 16:18:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.017 16:18:52 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.017 16:18:52 -- accel/accel.sh@40 -- # local IFS=, 00:07:19.017 16:18:52 -- accel/accel.sh@41 -- # jq -r . 00:07:19.017 [2024-04-17 16:18:52.817021] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 
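[Note: the failure just recorded is accel_perf's argument validation: for -w compress the uncompressed input must be named with -l, and without it the app aborts with "A filename is required." before any work is queued. The compress_verify case being configured next supplies the file but adds -y, which compress also rejects, as the next run shows. Hedged sketches of the three invocation shapes, reusing the fixture path from the xtrace:]

    ./build/examples/accel_perf -t 1 -w compress                                          # aborts: filename required
    ./build/examples/accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib     # valid shape
    ./build/examples/accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y  # aborts: verify unsupported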
00:07:19.017 [2024-04-17 16:18:52.817300] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63329 ] 00:07:19.017 [2024-04-17 16:18:52.950583] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.276 [2024-04-17 16:18:53.074070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.276 [2024-04-17 16:18:53.130807] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:19.276 [2024-04-17 16:18:53.206940] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:07:19.535 00:07:19.535 Compression does not support the verify option, aborting. 00:07:19.535 16:18:53 -- common/autotest_common.sh@641 -- # es=161 00:07:19.535 16:18:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:19.535 16:18:53 -- common/autotest_common.sh@650 -- # es=33 00:07:19.535 16:18:53 -- common/autotest_common.sh@651 -- # case "$es" in 00:07:19.535 16:18:53 -- common/autotest_common.sh@658 -- # es=1 00:07:19.535 16:18:53 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:19.535 00:07:19.535 real 0m0.534s 00:07:19.535 user 0m0.365s 00:07:19.535 sys 0m0.111s 00:07:19.535 16:18:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:19.535 16:18:53 -- common/autotest_common.sh@10 -- # set +x 00:07:19.535 ************************************ 00:07:19.535 END TEST accel_compress_verify 00:07:19.535 ************************************ 00:07:19.535 16:18:53 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:19.535 16:18:53 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:19.535 16:18:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:19.535 16:18:53 -- common/autotest_common.sh@10 -- # set +x 00:07:19.535 ************************************ 00:07:19.535 START TEST accel_wrong_workload 00:07:19.535 ************************************ 00:07:19.535 16:18:53 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:07:19.535 16:18:53 -- common/autotest_common.sh@638 -- # local es=0 00:07:19.535 16:18:53 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:19.535 16:18:53 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:07:19.535 16:18:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:19.535 16:18:53 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:07:19.535 16:18:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:19.535 16:18:53 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:07:19.535 16:18:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:19.535 16:18:53 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.535 16:18:53 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.535 16:18:53 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.535 16:18:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.535 16:18:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.535 16:18:53 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.535 16:18:53 -- accel/accel.sh@40 -- # local IFS=, 00:07:19.535 16:18:53 -- accel/accel.sh@41 -- # jq -r . 
00:07:19.535 Unsupported workload type: foobar 00:07:19.535 [2024-04-17 16:18:53.467831] app.c:1339:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:19.535 accel_perf options: 00:07:19.535 [-h help message] 00:07:19.535 [-q queue depth per core] 00:07:19.535 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:19.535 [-T number of threads per core 00:07:19.535 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:19.535 [-t time in seconds] 00:07:19.535 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:19.535 [ dif_verify, , dif_generate, dif_generate_copy 00:07:19.535 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:19.535 [-l for compress/decompress workloads, name of uncompressed input file 00:07:19.536 [-S for crc32c workload, use this seed value (default 0) 00:07:19.536 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:19.536 [-f for fill workload, use this BYTE value (default 255) 00:07:19.536 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:19.536 [-y verify result if this switch is on] 00:07:19.536 [-a tasks to allocate per core (default: same value as -q)] 00:07:19.536 Can be used to spread operations across a wider range of memory. 00:07:19.536 16:18:53 -- common/autotest_common.sh@641 -- # es=1 00:07:19.536 16:18:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:19.536 16:18:53 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:19.536 16:18:53 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:19.536 00:07:19.536 real 0m0.030s 00:07:19.536 user 0m0.019s 00:07:19.536 sys 0m0.011s 00:07:19.536 16:18:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:19.536 16:18:53 -- common/autotest_common.sh@10 -- # set +x 00:07:19.536 ************************************ 00:07:19.536 END TEST accel_wrong_workload 00:07:19.536 ************************************ 00:07:19.536 16:18:53 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:19.536 16:18:53 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:07:19.536 16:18:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:19.536 16:18:53 -- common/autotest_common.sh@10 -- # set +x 00:07:19.794 ************************************ 00:07:19.794 START TEST accel_negative_buffers 00:07:19.794 ************************************ 00:07:19.794 16:18:53 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:19.794 16:18:53 -- common/autotest_common.sh@638 -- # local es=0 00:07:19.794 16:18:53 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:19.794 16:18:53 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:07:19.794 16:18:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:19.794 16:18:53 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:07:19.794 16:18:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:19.794 16:18:53 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:07:19.794 16:18:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:19.794 16:18:53 -- accel/accel.sh@12 -- # 
build_accel_config 00:07:19.794 16:18:53 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.794 16:18:53 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.794 16:18:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.794 16:18:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.794 16:18:53 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.794 16:18:53 -- accel/accel.sh@40 -- # local IFS=, 00:07:19.794 16:18:53 -- accel/accel.sh@41 -- # jq -r . 00:07:19.794 -x option must be non-negative. 00:07:19.795 [2024-04-17 16:18:53.615493] app.c:1339:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:19.795 accel_perf options: 00:07:19.795 [-h help message] 00:07:19.795 [-q queue depth per core] 00:07:19.795 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:19.795 [-T number of threads per core 00:07:19.795 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:19.795 [-t time in seconds] 00:07:19.795 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:19.795 [ dif_verify, , dif_generate, dif_generate_copy 00:07:19.795 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:19.795 [-l for compress/decompress workloads, name of uncompressed input file 00:07:19.795 [-S for crc32c workload, use this seed value (default 0) 00:07:19.795 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:19.795 [-f for fill workload, use this BYTE value (default 255) 00:07:19.795 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:19.795 [-y verify result if this switch is on] 00:07:19.795 [-a tasks to allocate per core (default: same value as -q)] 00:07:19.795 Can be used to spread operations across a wider range of memory. 
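[Note: the usage dump above is printed on any argument-parse failure, and this section triggers it twice: once for an unknown workload (-w foobar) and once for a negative xor source-buffer count (-x -1; the -x help line states the minimum is 2). A hedged sketch of the two failing invocations and a corrected one, per that usage text:]

    ./build/examples/accel_perf -t 1 -w foobar         # rejected: not in the workload list
    ./build/examples/accel_perf -t 1 -w xor -y -x -1   # rejected: -x must be non-negative
    ./build/examples/accel_perf -t 1 -w xor -y -x 2    # smallest valid xor form (2 source buffers)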
00:07:19.795 16:18:53 -- common/autotest_common.sh@641 -- # es=1 00:07:19.795 16:18:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:19.795 16:18:53 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:19.795 16:18:53 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:19.795 00:07:19.795 real 0m0.032s 00:07:19.795 user 0m0.019s 00:07:19.795 sys 0m0.013s 00:07:19.795 16:18:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:19.795 16:18:53 -- common/autotest_common.sh@10 -- # set +x 00:07:19.795 ************************************ 00:07:19.795 END TEST accel_negative_buffers 00:07:19.795 ************************************ 00:07:19.795 16:18:53 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:19.795 16:18:53 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:19.795 16:18:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:19.795 16:18:53 -- common/autotest_common.sh@10 -- # set +x 00:07:19.795 ************************************ 00:07:19.795 START TEST accel_crc32c 00:07:19.795 ************************************ 00:07:19.795 16:18:53 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:19.795 16:18:53 -- accel/accel.sh@16 -- # local accel_opc 00:07:19.795 16:18:53 -- accel/accel.sh@17 -- # local accel_module 00:07:19.795 16:18:53 -- accel/accel.sh@19 -- # IFS=: 00:07:19.795 16:18:53 -- accel/accel.sh@19 -- # read -r var val 00:07:19.795 16:18:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:19.795 16:18:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:19.795 16:18:53 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.795 16:18:53 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.795 16:18:53 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.795 16:18:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.795 16:18:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.795 16:18:53 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.795 16:18:53 -- accel/accel.sh@40 -- # local IFS=, 00:07:19.795 16:18:53 -- accel/accel.sh@41 -- # jq -r . 00:07:19.795 [2024-04-17 16:18:53.760700] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 
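[Note: the long run of 'val=' lines that follows appears to be the harness stepping through the expected accel settings for this case (opcode crc32c, seed 32 from -S, '4096 bytes' transfers, the software module, and so on) before comparing them against the running target; that interpretation of the xtrace is an assumption. The underlying command, taken verbatim from the xtrace, is below; -c /dev/fd/62 feeds the generated accel config over a file descriptor, so a standalone run would pass a config file instead or omit -c:]

    ./build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y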
00:07:19.795 [2024-04-17 16:18:53.760811] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63405 ] 00:07:20.053 [2024-04-17 16:18:53.897164] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.053 [2024-04-17 16:18:54.030902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.053 16:18:54 -- accel/accel.sh@20 -- # val= 00:07:20.053 16:18:54 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.053 16:18:54 -- accel/accel.sh@19 -- # IFS=: 00:07:20.053 16:18:54 -- accel/accel.sh@19 -- # read -r var val 00:07:20.053 16:18:54 -- accel/accel.sh@20 -- # val= 00:07:20.053 16:18:54 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.053 16:18:54 -- accel/accel.sh@19 -- # IFS=: 00:07:20.053 16:18:54 -- accel/accel.sh@19 -- # read -r var val 00:07:20.053 16:18:54 -- accel/accel.sh@20 -- # val=0x1 00:07:20.053 16:18:54 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.053 16:18:54 -- accel/accel.sh@19 -- # IFS=: 00:07:20.053 16:18:54 -- accel/accel.sh@19 -- # read -r var val 00:07:20.053 16:18:54 -- accel/accel.sh@20 -- # val= 00:07:20.053 16:18:54 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.053 16:18:54 -- accel/accel.sh@19 -- # IFS=: 00:07:20.053 16:18:54 -- accel/accel.sh@19 -- # read -r var val 00:07:20.053 16:18:54 -- accel/accel.sh@20 -- # val= 00:07:20.053 16:18:54 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.053 16:18:54 -- accel/accel.sh@19 -- # IFS=: 00:07:20.053 16:18:54 -- accel/accel.sh@19 -- # read -r var val 00:07:20.053 16:18:54 -- accel/accel.sh@20 -- # val=crc32c 00:07:20.053 16:18:54 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.053 16:18:54 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:20.053 16:18:54 -- accel/accel.sh@19 -- # IFS=: 00:07:20.053 16:18:54 -- accel/accel.sh@19 -- # read -r var val 00:07:20.053 16:18:54 -- accel/accel.sh@20 -- # val=32 00:07:20.053 16:18:54 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.053 16:18:54 -- accel/accel.sh@19 -- # IFS=: 00:07:20.329 16:18:54 -- accel/accel.sh@19 -- # read -r var val 00:07:20.329 16:18:54 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:20.329 16:18:54 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.329 16:18:54 -- accel/accel.sh@19 -- # IFS=: 00:07:20.329 16:18:54 -- accel/accel.sh@19 -- # read -r var val 00:07:20.329 16:18:54 -- accel/accel.sh@20 -- # val= 00:07:20.329 16:18:54 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.329 16:18:54 -- accel/accel.sh@19 -- # IFS=: 00:07:20.329 16:18:54 -- accel/accel.sh@19 -- # read -r var val 00:07:20.329 16:18:54 -- accel/accel.sh@20 -- # val=software 00:07:20.329 16:18:54 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.329 16:18:54 -- accel/accel.sh@22 -- # accel_module=software 00:07:20.329 16:18:54 -- accel/accel.sh@19 -- # IFS=: 00:07:20.329 16:18:54 -- accel/accel.sh@19 -- # read -r var val 00:07:20.329 16:18:54 -- accel/accel.sh@20 -- # val=32 00:07:20.329 16:18:54 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.329 16:18:54 -- accel/accel.sh@19 -- # IFS=: 00:07:20.329 16:18:54 -- accel/accel.sh@19 -- # read -r var val 00:07:20.329 16:18:54 -- accel/accel.sh@20 -- # val=32 00:07:20.329 16:18:54 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.329 16:18:54 -- accel/accel.sh@19 -- # IFS=: 00:07:20.329 16:18:54 -- accel/accel.sh@19 -- # read -r var val 00:07:20.329 16:18:54 -- accel/accel.sh@20 -- # val=1 00:07:20.329 16:18:54 
-- accel/accel.sh@21 -- # case "$var" in 00:07:20.329 16:18:54 -- accel/accel.sh@19 -- # IFS=: 00:07:20.329 16:18:54 -- accel/accel.sh@19 -- # read -r var val 00:07:20.329 16:18:54 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:20.329 16:18:54 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.329 16:18:54 -- accel/accel.sh@19 -- # IFS=: 00:07:20.329 16:18:54 -- accel/accel.sh@19 -- # read -r var val 00:07:20.329 16:18:54 -- accel/accel.sh@20 -- # val=Yes 00:07:20.329 16:18:54 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.329 16:18:54 -- accel/accel.sh@19 -- # IFS=: 00:07:20.329 16:18:54 -- accel/accel.sh@19 -- # read -r var val 00:07:20.329 16:18:54 -- accel/accel.sh@20 -- # val= 00:07:20.329 16:18:54 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.329 16:18:54 -- accel/accel.sh@19 -- # IFS=: 00:07:20.329 16:18:54 -- accel/accel.sh@19 -- # read -r var val 00:07:20.329 16:18:54 -- accel/accel.sh@20 -- # val= 00:07:20.329 16:18:54 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.329 16:18:54 -- accel/accel.sh@19 -- # IFS=: 00:07:20.329 16:18:54 -- accel/accel.sh@19 -- # read -r var val 00:07:21.269 16:18:55 -- accel/accel.sh@20 -- # val= 00:07:21.269 16:18:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.269 16:18:55 -- accel/accel.sh@19 -- # IFS=: 00:07:21.269 16:18:55 -- accel/accel.sh@19 -- # read -r var val 00:07:21.269 16:18:55 -- accel/accel.sh@20 -- # val= 00:07:21.269 16:18:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.269 16:18:55 -- accel/accel.sh@19 -- # IFS=: 00:07:21.269 16:18:55 -- accel/accel.sh@19 -- # read -r var val 00:07:21.269 16:18:55 -- accel/accel.sh@20 -- # val= 00:07:21.269 16:18:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.269 16:18:55 -- accel/accel.sh@19 -- # IFS=: 00:07:21.269 16:18:55 -- accel/accel.sh@19 -- # read -r var val 00:07:21.269 16:18:55 -- accel/accel.sh@20 -- # val= 00:07:21.269 16:18:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.269 16:18:55 -- accel/accel.sh@19 -- # IFS=: 00:07:21.269 16:18:55 -- accel/accel.sh@19 -- # read -r var val 00:07:21.269 16:18:55 -- accel/accel.sh@20 -- # val= 00:07:21.269 16:18:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.269 16:18:55 -- accel/accel.sh@19 -- # IFS=: 00:07:21.269 16:18:55 -- accel/accel.sh@19 -- # read -r var val 00:07:21.269 16:18:55 -- accel/accel.sh@20 -- # val= 00:07:21.269 16:18:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.269 16:18:55 -- accel/accel.sh@19 -- # IFS=: 00:07:21.269 16:18:55 -- accel/accel.sh@19 -- # read -r var val 00:07:21.269 16:18:55 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:21.269 16:18:55 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:21.269 16:18:55 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.269 00:07:21.269 real 0m1.558s 00:07:21.269 user 0m1.350s 00:07:21.269 sys 0m0.113s 00:07:21.269 16:18:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:21.269 16:18:55 -- common/autotest_common.sh@10 -- # set +x 00:07:21.269 ************************************ 00:07:21.269 END TEST accel_crc32c 00:07:21.269 ************************************ 00:07:21.528 16:18:55 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:21.528 16:18:55 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:21.528 16:18:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:21.528 16:18:55 -- common/autotest_common.sh@10 -- # set +x 00:07:21.528 ************************************ 00:07:21.528 START TEST accel_crc32c_C2 00:07:21.528 
************************************ 00:07:21.528 16:18:55 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:21.528 16:18:55 -- accel/accel.sh@16 -- # local accel_opc 00:07:21.528 16:18:55 -- accel/accel.sh@17 -- # local accel_module 00:07:21.528 16:18:55 -- accel/accel.sh@19 -- # IFS=: 00:07:21.528 16:18:55 -- accel/accel.sh@19 -- # read -r var val 00:07:21.528 16:18:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:21.528 16:18:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:21.528 16:18:55 -- accel/accel.sh@12 -- # build_accel_config 00:07:21.528 16:18:55 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:21.528 16:18:55 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:21.528 16:18:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.528 16:18:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.528 16:18:55 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:21.528 16:18:55 -- accel/accel.sh@40 -- # local IFS=, 00:07:21.528 16:18:55 -- accel/accel.sh@41 -- # jq -r . 00:07:21.528 [2024-04-17 16:18:55.440010] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:07:21.528 [2024-04-17 16:18:55.440110] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63449 ] 00:07:21.786 [2024-04-17 16:18:55.580503] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.786 [2024-04-17 16:18:55.710411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.786 16:18:55 -- accel/accel.sh@20 -- # val= 00:07:21.786 16:18:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.786 16:18:55 -- accel/accel.sh@19 -- # IFS=: 00:07:21.786 16:18:55 -- accel/accel.sh@19 -- # read -r var val 00:07:21.786 16:18:55 -- accel/accel.sh@20 -- # val= 00:07:21.786 16:18:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.786 16:18:55 -- accel/accel.sh@19 -- # IFS=: 00:07:21.786 16:18:55 -- accel/accel.sh@19 -- # read -r var val 00:07:21.786 16:18:55 -- accel/accel.sh@20 -- # val=0x1 00:07:21.786 16:18:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.786 16:18:55 -- accel/accel.sh@19 -- # IFS=: 00:07:21.786 16:18:55 -- accel/accel.sh@19 -- # read -r var val 00:07:21.786 16:18:55 -- accel/accel.sh@20 -- # val= 00:07:21.786 16:18:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.786 16:18:55 -- accel/accel.sh@19 -- # IFS=: 00:07:21.786 16:18:55 -- accel/accel.sh@19 -- # read -r var val 00:07:21.786 16:18:55 -- accel/accel.sh@20 -- # val= 00:07:21.786 16:18:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.786 16:18:55 -- accel/accel.sh@19 -- # IFS=: 00:07:21.786 16:18:55 -- accel/accel.sh@19 -- # read -r var val 00:07:21.786 16:18:55 -- accel/accel.sh@20 -- # val=crc32c 00:07:21.786 16:18:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.786 16:18:55 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:21.786 16:18:55 -- accel/accel.sh@19 -- # IFS=: 00:07:21.786 16:18:55 -- accel/accel.sh@19 -- # read -r var val 00:07:21.786 16:18:55 -- accel/accel.sh@20 -- # val=0 00:07:21.786 16:18:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.786 16:18:55 -- accel/accel.sh@19 -- # IFS=: 00:07:21.786 16:18:55 -- accel/accel.sh@19 -- # read -r var val 00:07:21.786 16:18:55 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:21.786 16:18:55 -- accel/accel.sh@21 -- # case "$var" 
in 00:07:21.786 16:18:55 -- accel/accel.sh@19 -- # IFS=: 00:07:21.786 16:18:55 -- accel/accel.sh@19 -- # read -r var val 00:07:21.786 16:18:55 -- accel/accel.sh@20 -- # val= 00:07:21.786 16:18:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.786 16:18:55 -- accel/accel.sh@19 -- # IFS=: 00:07:21.786 16:18:55 -- accel/accel.sh@19 -- # read -r var val 00:07:21.786 16:18:55 -- accel/accel.sh@20 -- # val=software 00:07:21.786 16:18:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.786 16:18:55 -- accel/accel.sh@22 -- # accel_module=software 00:07:21.786 16:18:55 -- accel/accel.sh@19 -- # IFS=: 00:07:21.786 16:18:55 -- accel/accel.sh@19 -- # read -r var val 00:07:21.786 16:18:55 -- accel/accel.sh@20 -- # val=32 00:07:21.786 16:18:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.786 16:18:55 -- accel/accel.sh@19 -- # IFS=: 00:07:21.786 16:18:55 -- accel/accel.sh@19 -- # read -r var val 00:07:21.786 16:18:55 -- accel/accel.sh@20 -- # val=32 00:07:21.786 16:18:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.786 16:18:55 -- accel/accel.sh@19 -- # IFS=: 00:07:21.786 16:18:55 -- accel/accel.sh@19 -- # read -r var val 00:07:21.786 16:18:55 -- accel/accel.sh@20 -- # val=1 00:07:21.786 16:18:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.786 16:18:55 -- accel/accel.sh@19 -- # IFS=: 00:07:21.786 16:18:55 -- accel/accel.sh@19 -- # read -r var val 00:07:21.786 16:18:55 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:21.786 16:18:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.786 16:18:55 -- accel/accel.sh@19 -- # IFS=: 00:07:21.786 16:18:55 -- accel/accel.sh@19 -- # read -r var val 00:07:21.786 16:18:55 -- accel/accel.sh@20 -- # val=Yes 00:07:21.786 16:18:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.786 16:18:55 -- accel/accel.sh@19 -- # IFS=: 00:07:21.786 16:18:55 -- accel/accel.sh@19 -- # read -r var val 00:07:21.786 16:18:55 -- accel/accel.sh@20 -- # val= 00:07:21.786 16:18:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.786 16:18:55 -- accel/accel.sh@19 -- # IFS=: 00:07:21.786 16:18:55 -- accel/accel.sh@19 -- # read -r var val 00:07:21.786 16:18:55 -- accel/accel.sh@20 -- # val= 00:07:21.786 16:18:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.786 16:18:55 -- accel/accel.sh@19 -- # IFS=: 00:07:21.786 16:18:55 -- accel/accel.sh@19 -- # read -r var val 00:07:23.160 16:18:56 -- accel/accel.sh@20 -- # val= 00:07:23.160 16:18:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.160 16:18:56 -- accel/accel.sh@19 -- # IFS=: 00:07:23.160 16:18:56 -- accel/accel.sh@19 -- # read -r var val 00:07:23.160 16:18:56 -- accel/accel.sh@20 -- # val= 00:07:23.160 16:18:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.160 16:18:56 -- accel/accel.sh@19 -- # IFS=: 00:07:23.160 16:18:56 -- accel/accel.sh@19 -- # read -r var val 00:07:23.160 16:18:56 -- accel/accel.sh@20 -- # val= 00:07:23.160 16:18:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.160 16:18:56 -- accel/accel.sh@19 -- # IFS=: 00:07:23.160 16:18:56 -- accel/accel.sh@19 -- # read -r var val 00:07:23.160 16:18:56 -- accel/accel.sh@20 -- # val= 00:07:23.160 16:18:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.160 16:18:56 -- accel/accel.sh@19 -- # IFS=: 00:07:23.160 16:18:56 -- accel/accel.sh@19 -- # read -r var val 00:07:23.160 16:18:56 -- accel/accel.sh@20 -- # val= 00:07:23.160 16:18:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.160 16:18:56 -- accel/accel.sh@19 -- # IFS=: 00:07:23.160 16:18:56 -- accel/accel.sh@19 -- # read -r var val 00:07:23.160 16:18:56 -- accel/accel.sh@20 -- # val= 
00:07:23.160 16:18:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.160 16:18:56 -- accel/accel.sh@19 -- # IFS=: 00:07:23.160 16:18:56 -- accel/accel.sh@19 -- # read -r var val 00:07:23.160 16:18:56 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:23.160 16:18:56 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:23.160 ************************************ 00:07:23.160 END TEST accel_crc32c_C2 00:07:23.160 ************************************ 00:07:23.160 16:18:56 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.160 00:07:23.160 real 0m1.562s 00:07:23.160 user 0m1.341s 00:07:23.160 sys 0m0.122s 00:07:23.160 16:18:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:23.160 16:18:56 -- common/autotest_common.sh@10 -- # set +x 00:07:23.160 16:18:57 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:23.160 16:18:57 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:23.160 16:18:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:23.160 16:18:57 -- common/autotest_common.sh@10 -- # set +x 00:07:23.160 ************************************ 00:07:23.160 START TEST accel_copy 00:07:23.160 ************************************ 00:07:23.160 16:18:57 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:07:23.160 16:18:57 -- accel/accel.sh@16 -- # local accel_opc 00:07:23.160 16:18:57 -- accel/accel.sh@17 -- # local accel_module 00:07:23.160 16:18:57 -- accel/accel.sh@19 -- # IFS=: 00:07:23.160 16:18:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:23.160 16:18:57 -- accel/accel.sh@19 -- # read -r var val 00:07:23.160 16:18:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:23.160 16:18:57 -- accel/accel.sh@12 -- # build_accel_config 00:07:23.160 16:18:57 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:23.160 16:18:57 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:23.160 16:18:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.160 16:18:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.160 16:18:57 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:23.160 16:18:57 -- accel/accel.sh@40 -- # local IFS=, 00:07:23.160 16:18:57 -- accel/accel.sh@41 -- # jq -r . 00:07:23.160 [2024-04-17 16:18:57.125365] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 
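Every case here hands accel_perf its configuration as -c /dev/fd/62: build_accel_config accumulates JSON fragments in the accel_json_cfg array (apparently empty in this run, since each [[ 0 -gt 0 ]] guard traced above evaluated false), joins them with IFS=, and validates the result with jq -r . before exposing it on a file descriptor. A minimal sketch of that idiom; the JSON envelope is an assumption for illustration:

# Sketch only: the envelope below is assumed, and <(...) yields some
# /dev/fd/NN, mirroring the -c /dev/fd/62 seen throughout this log.
accel_json_cfg=()
emit_accel_config() {
  local IFS=,
  echo "{\"accel\": [${accel_json_cfg[*]}]}" | jq -r .
}
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c <(emit_accel_config) -t 1 -w copy -y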
00:07:23.160 [2024-04-17 16:18:57.125495] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63487 ] 00:07:23.421 [2024-04-17 16:18:57.267929] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.422 [2024-04-17 16:18:57.389685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.422 16:18:57 -- accel/accel.sh@20 -- # val= 00:07:23.422 16:18:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.422 16:18:57 -- accel/accel.sh@19 -- # IFS=: 00:07:23.422 16:18:57 -- accel/accel.sh@19 -- # read -r var val 00:07:23.422 16:18:57 -- accel/accel.sh@20 -- # val= 00:07:23.422 16:18:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.422 16:18:57 -- accel/accel.sh@19 -- # IFS=: 00:07:23.422 16:18:57 -- accel/accel.sh@19 -- # read -r var val 00:07:23.422 16:18:57 -- accel/accel.sh@20 -- # val=0x1 00:07:23.422 16:18:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.422 16:18:57 -- accel/accel.sh@19 -- # IFS=: 00:07:23.422 16:18:57 -- accel/accel.sh@19 -- # read -r var val 00:07:23.422 16:18:57 -- accel/accel.sh@20 -- # val= 00:07:23.422 16:18:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.422 16:18:57 -- accel/accel.sh@19 -- # IFS=: 00:07:23.422 16:18:57 -- accel/accel.sh@19 -- # read -r var val 00:07:23.422 16:18:57 -- accel/accel.sh@20 -- # val= 00:07:23.422 16:18:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.422 16:18:57 -- accel/accel.sh@19 -- # IFS=: 00:07:23.422 16:18:57 -- accel/accel.sh@19 -- # read -r var val 00:07:23.422 16:18:57 -- accel/accel.sh@20 -- # val=copy 00:07:23.422 16:18:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.422 16:18:57 -- accel/accel.sh@23 -- # accel_opc=copy 00:07:23.422 16:18:57 -- accel/accel.sh@19 -- # IFS=: 00:07:23.422 16:18:57 -- accel/accel.sh@19 -- # read -r var val 00:07:23.422 16:18:57 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:23.422 16:18:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.422 16:18:57 -- accel/accel.sh@19 -- # IFS=: 00:07:23.422 16:18:57 -- accel/accel.sh@19 -- # read -r var val 00:07:23.422 16:18:57 -- accel/accel.sh@20 -- # val= 00:07:23.422 16:18:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.422 16:18:57 -- accel/accel.sh@19 -- # IFS=: 00:07:23.422 16:18:57 -- accel/accel.sh@19 -- # read -r var val 00:07:23.422 16:18:57 -- accel/accel.sh@20 -- # val=software 00:07:23.422 16:18:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.422 16:18:57 -- accel/accel.sh@22 -- # accel_module=software 00:07:23.422 16:18:57 -- accel/accel.sh@19 -- # IFS=: 00:07:23.422 16:18:57 -- accel/accel.sh@19 -- # read -r var val 00:07:23.422 16:18:57 -- accel/accel.sh@20 -- # val=32 00:07:23.422 16:18:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.422 16:18:57 -- accel/accel.sh@19 -- # IFS=: 00:07:23.422 16:18:57 -- accel/accel.sh@19 -- # read -r var val 00:07:23.422 16:18:57 -- accel/accel.sh@20 -- # val=32 00:07:23.422 16:18:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.422 16:18:57 -- accel/accel.sh@19 -- # IFS=: 00:07:23.422 16:18:57 -- accel/accel.sh@19 -- # read -r var val 00:07:23.422 16:18:57 -- accel/accel.sh@20 -- # val=1 00:07:23.422 16:18:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.422 16:18:57 -- accel/accel.sh@19 -- # IFS=: 00:07:23.422 16:18:57 -- accel/accel.sh@19 -- # read -r var val 00:07:23.422 16:18:57 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:23.422 
16:18:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.422 16:18:57 -- accel/accel.sh@19 -- # IFS=: 00:07:23.422 16:18:57 -- accel/accel.sh@19 -- # read -r var val 00:07:23.422 16:18:57 -- accel/accel.sh@20 -- # val=Yes 00:07:23.422 16:18:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.422 16:18:57 -- accel/accel.sh@19 -- # IFS=: 00:07:23.422 16:18:57 -- accel/accel.sh@19 -- # read -r var val 00:07:23.422 16:18:57 -- accel/accel.sh@20 -- # val= 00:07:23.422 16:18:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.422 16:18:57 -- accel/accel.sh@19 -- # IFS=: 00:07:23.422 16:18:57 -- accel/accel.sh@19 -- # read -r var val 00:07:23.422 16:18:57 -- accel/accel.sh@20 -- # val= 00:07:23.422 16:18:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.422 16:18:57 -- accel/accel.sh@19 -- # IFS=: 00:07:23.422 16:18:57 -- accel/accel.sh@19 -- # read -r var val 00:07:24.799 16:18:58 -- accel/accel.sh@20 -- # val= 00:07:24.799 16:18:58 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.799 16:18:58 -- accel/accel.sh@19 -- # IFS=: 00:07:24.799 16:18:58 -- accel/accel.sh@19 -- # read -r var val 00:07:24.799 16:18:58 -- accel/accel.sh@20 -- # val= 00:07:24.799 16:18:58 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.799 16:18:58 -- accel/accel.sh@19 -- # IFS=: 00:07:24.799 16:18:58 -- accel/accel.sh@19 -- # read -r var val 00:07:24.799 16:18:58 -- accel/accel.sh@20 -- # val= 00:07:24.799 16:18:58 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.799 16:18:58 -- accel/accel.sh@19 -- # IFS=: 00:07:24.799 16:18:58 -- accel/accel.sh@19 -- # read -r var val 00:07:24.799 16:18:58 -- accel/accel.sh@20 -- # val= 00:07:24.799 16:18:58 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.799 16:18:58 -- accel/accel.sh@19 -- # IFS=: 00:07:24.799 16:18:58 -- accel/accel.sh@19 -- # read -r var val 00:07:24.799 16:18:58 -- accel/accel.sh@20 -- # val= 00:07:24.799 16:18:58 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.799 16:18:58 -- accel/accel.sh@19 -- # IFS=: 00:07:24.799 16:18:58 -- accel/accel.sh@19 -- # read -r var val 00:07:24.799 16:18:58 -- accel/accel.sh@20 -- # val= 00:07:24.799 16:18:58 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.799 16:18:58 -- accel/accel.sh@19 -- # IFS=: 00:07:24.799 16:18:58 -- accel/accel.sh@19 -- # read -r var val 00:07:24.799 ************************************ 00:07:24.799 END TEST accel_copy 00:07:24.799 ************************************ 00:07:24.799 16:18:58 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:24.799 16:18:58 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:24.799 16:18:58 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:24.799 00:07:24.799 real 0m1.546s 00:07:24.799 user 0m1.329s 00:07:24.799 sys 0m0.122s 00:07:24.799 16:18:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:24.799 16:18:58 -- common/autotest_common.sh@10 -- # set +x 00:07:24.799 16:18:58 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:24.799 16:18:58 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:24.799 16:18:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:24.799 16:18:58 -- common/autotest_common.sh@10 -- # set +x 00:07:24.799 ************************************ 00:07:24.799 START TEST accel_fill 00:07:24.799 ************************************ 00:07:24.799 16:18:58 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:24.799 16:18:58 -- accel/accel.sh@16 -- # local accel_opc 00:07:24.799 16:18:58 -- accel/accel.sh@17 -- # local 
accel_module 00:07:24.799 16:18:58 -- accel/accel.sh@19 -- # IFS=: 00:07:24.799 16:18:58 -- accel/accel.sh@19 -- # read -r var val 00:07:24.799 16:18:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:24.799 16:18:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:24.799 16:18:58 -- accel/accel.sh@12 -- # build_accel_config 00:07:24.799 16:18:58 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:24.799 16:18:58 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:24.799 16:18:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.799 16:18:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.799 16:18:58 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:24.799 16:18:58 -- accel/accel.sh@40 -- # local IFS=, 00:07:24.799 16:18:58 -- accel/accel.sh@41 -- # jq -r . 00:07:24.799 [2024-04-17 16:18:58.782471] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:07:24.799 [2024-04-17 16:18:58.782597] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63527 ] 00:07:25.058 [2024-04-17 16:18:58.929378] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.058 [2024-04-17 16:18:59.049127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.331 16:18:59 -- accel/accel.sh@20 -- # val= 00:07:25.331 16:18:59 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.331 16:18:59 -- accel/accel.sh@19 -- # IFS=: 00:07:25.331 16:18:59 -- accel/accel.sh@19 -- # read -r var val 00:07:25.331 16:18:59 -- accel/accel.sh@20 -- # val= 00:07:25.331 16:18:59 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.331 16:18:59 -- accel/accel.sh@19 -- # IFS=: 00:07:25.331 16:18:59 -- accel/accel.sh@19 -- # read -r var val 00:07:25.331 16:18:59 -- accel/accel.sh@20 -- # val=0x1 00:07:25.331 16:18:59 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.331 16:18:59 -- accel/accel.sh@19 -- # IFS=: 00:07:25.331 16:18:59 -- accel/accel.sh@19 -- # read -r var val 00:07:25.331 16:18:59 -- accel/accel.sh@20 -- # val= 00:07:25.331 16:18:59 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.331 16:18:59 -- accel/accel.sh@19 -- # IFS=: 00:07:25.331 16:18:59 -- accel/accel.sh@19 -- # read -r var val 00:07:25.331 16:18:59 -- accel/accel.sh@20 -- # val= 00:07:25.331 16:18:59 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.331 16:18:59 -- accel/accel.sh@19 -- # IFS=: 00:07:25.331 16:18:59 -- accel/accel.sh@19 -- # read -r var val 00:07:25.331 16:18:59 -- accel/accel.sh@20 -- # val=fill 00:07:25.331 16:18:59 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.331 16:18:59 -- accel/accel.sh@23 -- # accel_opc=fill 00:07:25.331 16:18:59 -- accel/accel.sh@19 -- # IFS=: 00:07:25.331 16:18:59 -- accel/accel.sh@19 -- # read -r var val 00:07:25.331 16:18:59 -- accel/accel.sh@20 -- # val=0x80 00:07:25.331 16:18:59 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.331 16:18:59 -- accel/accel.sh@19 -- # IFS=: 00:07:25.331 16:18:59 -- accel/accel.sh@19 -- # read -r var val 00:07:25.331 16:18:59 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:25.331 16:18:59 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.331 16:18:59 -- accel/accel.sh@19 -- # IFS=: 00:07:25.331 16:18:59 -- accel/accel.sh@19 -- # read -r var val 00:07:25.331 16:18:59 -- accel/accel.sh@20 -- # val= 00:07:25.331 16:18:59 -- accel/accel.sh@21 -- # case 
"$var" in 00:07:25.331 16:18:59 -- accel/accel.sh@19 -- # IFS=: 00:07:25.331 16:18:59 -- accel/accel.sh@19 -- # read -r var val 00:07:25.331 16:18:59 -- accel/accel.sh@20 -- # val=software 00:07:25.331 16:18:59 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.331 16:18:59 -- accel/accel.sh@22 -- # accel_module=software 00:07:25.331 16:18:59 -- accel/accel.sh@19 -- # IFS=: 00:07:25.331 16:18:59 -- accel/accel.sh@19 -- # read -r var val 00:07:25.331 16:18:59 -- accel/accel.sh@20 -- # val=64 00:07:25.331 16:18:59 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.331 16:18:59 -- accel/accel.sh@19 -- # IFS=: 00:07:25.331 16:18:59 -- accel/accel.sh@19 -- # read -r var val 00:07:25.331 16:18:59 -- accel/accel.sh@20 -- # val=64 00:07:25.331 16:18:59 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.331 16:18:59 -- accel/accel.sh@19 -- # IFS=: 00:07:25.331 16:18:59 -- accel/accel.sh@19 -- # read -r var val 00:07:25.331 16:18:59 -- accel/accel.sh@20 -- # val=1 00:07:25.331 16:18:59 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.331 16:18:59 -- accel/accel.sh@19 -- # IFS=: 00:07:25.331 16:18:59 -- accel/accel.sh@19 -- # read -r var val 00:07:25.331 16:18:59 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:25.331 16:18:59 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.331 16:18:59 -- accel/accel.sh@19 -- # IFS=: 00:07:25.331 16:18:59 -- accel/accel.sh@19 -- # read -r var val 00:07:25.331 16:18:59 -- accel/accel.sh@20 -- # val=Yes 00:07:25.331 16:18:59 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.331 16:18:59 -- accel/accel.sh@19 -- # IFS=: 00:07:25.331 16:18:59 -- accel/accel.sh@19 -- # read -r var val 00:07:25.331 16:18:59 -- accel/accel.sh@20 -- # val= 00:07:25.331 16:18:59 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.331 16:18:59 -- accel/accel.sh@19 -- # IFS=: 00:07:25.331 16:18:59 -- accel/accel.sh@19 -- # read -r var val 00:07:25.331 16:18:59 -- accel/accel.sh@20 -- # val= 00:07:25.331 16:18:59 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.331 16:18:59 -- accel/accel.sh@19 -- # IFS=: 00:07:25.331 16:18:59 -- accel/accel.sh@19 -- # read -r var val 00:07:26.265 16:19:00 -- accel/accel.sh@20 -- # val= 00:07:26.265 16:19:00 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.265 16:19:00 -- accel/accel.sh@19 -- # IFS=: 00:07:26.265 16:19:00 -- accel/accel.sh@19 -- # read -r var val 00:07:26.265 16:19:00 -- accel/accel.sh@20 -- # val= 00:07:26.265 16:19:00 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.265 16:19:00 -- accel/accel.sh@19 -- # IFS=: 00:07:26.265 16:19:00 -- accel/accel.sh@19 -- # read -r var val 00:07:26.265 16:19:00 -- accel/accel.sh@20 -- # val= 00:07:26.265 16:19:00 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.265 16:19:00 -- accel/accel.sh@19 -- # IFS=: 00:07:26.265 16:19:00 -- accel/accel.sh@19 -- # read -r var val 00:07:26.265 16:19:00 -- accel/accel.sh@20 -- # val= 00:07:26.265 16:19:00 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.265 16:19:00 -- accel/accel.sh@19 -- # IFS=: 00:07:26.265 16:19:00 -- accel/accel.sh@19 -- # read -r var val 00:07:26.265 16:19:00 -- accel/accel.sh@20 -- # val= 00:07:26.265 16:19:00 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.265 16:19:00 -- accel/accel.sh@19 -- # IFS=: 00:07:26.265 ************************************ 00:07:26.265 END TEST accel_fill 00:07:26.265 ************************************ 00:07:26.265 16:19:00 -- accel/accel.sh@19 -- # read -r var val 00:07:26.265 16:19:00 -- accel/accel.sh@20 -- # val= 00:07:26.265 16:19:00 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.265 16:19:00 -- 
accel/accel.sh@19 -- # IFS=: 00:07:26.265 16:19:00 -- accel/accel.sh@19 -- # read -r var val 00:07:26.265 16:19:00 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:26.265 16:19:00 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:26.265 16:19:00 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.265 00:07:26.265 real 0m1.549s 00:07:26.265 user 0m1.334s 00:07:26.265 sys 0m0.116s 00:07:26.265 16:19:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:26.265 16:19:00 -- common/autotest_common.sh@10 -- # set +x 00:07:26.524 16:19:00 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:26.524 16:19:00 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:26.524 16:19:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:26.524 16:19:00 -- common/autotest_common.sh@10 -- # set +x 00:07:26.524 ************************************ 00:07:26.524 START TEST accel_copy_crc32c 00:07:26.524 ************************************ 00:07:26.524 16:19:00 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:07:26.524 16:19:00 -- accel/accel.sh@16 -- # local accel_opc 00:07:26.524 16:19:00 -- accel/accel.sh@17 -- # local accel_module 00:07:26.524 16:19:00 -- accel/accel.sh@19 -- # IFS=: 00:07:26.524 16:19:00 -- accel/accel.sh@19 -- # read -r var val 00:07:26.524 16:19:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:26.524 16:19:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:26.524 16:19:00 -- accel/accel.sh@12 -- # build_accel_config 00:07:26.524 16:19:00 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:26.524 16:19:00 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:26.524 16:19:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.524 16:19:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.524 16:19:00 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:26.524 16:19:00 -- accel/accel.sh@40 -- # local IFS=, 00:07:26.524 16:19:00 -- accel/accel.sh@41 -- # jq -r . 00:07:26.524 [2024-04-17 16:19:00.448354] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 
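The START TEST/END TEST banners and the real/user/sys triplet around each case come from the run_test wrapper in autotest_common.sh, which also toggles xtrace and sanity-checks its argument count (the '[' 9 -le 1 ']' checks in this trace). Its observable behavior reduces to roughly the following; this is a behavioral sketch, not the real function body:

# The actual helper adds xtrace toggling and exit-status bookkeeping.
run_test() {
  local test_name=$1
  shift
  echo "************************************"
  echo "START TEST $test_name"
  echo "************************************"
  time "$@"   # bash's time keyword emits the real/user/sys lines seen here
  echo "************************************"
  echo "END TEST $test_name"
  echo "************************************"
}
run_test demo_case sleep 0.1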
00:07:26.524 [2024-04-17 16:19:00.448455] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63565 ] 00:07:26.782 [2024-04-17 16:19:00.590982] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.782 [2024-04-17 16:19:00.721715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.782 16:19:00 -- accel/accel.sh@20 -- # val= 00:07:26.782 16:19:00 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.782 16:19:00 -- accel/accel.sh@19 -- # IFS=: 00:07:26.782 16:19:00 -- accel/accel.sh@19 -- # read -r var val 00:07:26.782 16:19:00 -- accel/accel.sh@20 -- # val= 00:07:26.782 16:19:00 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.782 16:19:00 -- accel/accel.sh@19 -- # IFS=: 00:07:26.782 16:19:00 -- accel/accel.sh@19 -- # read -r var val 00:07:26.782 16:19:00 -- accel/accel.sh@20 -- # val=0x1 00:07:26.782 16:19:00 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.782 16:19:00 -- accel/accel.sh@19 -- # IFS=: 00:07:26.782 16:19:00 -- accel/accel.sh@19 -- # read -r var val 00:07:26.782 16:19:00 -- accel/accel.sh@20 -- # val= 00:07:26.782 16:19:00 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.782 16:19:00 -- accel/accel.sh@19 -- # IFS=: 00:07:26.782 16:19:00 -- accel/accel.sh@19 -- # read -r var val 00:07:26.782 16:19:00 -- accel/accel.sh@20 -- # val= 00:07:26.782 16:19:00 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.782 16:19:00 -- accel/accel.sh@19 -- # IFS=: 00:07:26.782 16:19:00 -- accel/accel.sh@19 -- # read -r var val 00:07:26.782 16:19:00 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:26.782 16:19:00 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.782 16:19:00 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:26.782 16:19:00 -- accel/accel.sh@19 -- # IFS=: 00:07:26.782 16:19:00 -- accel/accel.sh@19 -- # read -r var val 00:07:26.782 16:19:00 -- accel/accel.sh@20 -- # val=0 00:07:26.782 16:19:00 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.782 16:19:00 -- accel/accel.sh@19 -- # IFS=: 00:07:26.782 16:19:00 -- accel/accel.sh@19 -- # read -r var val 00:07:26.782 16:19:00 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:26.782 16:19:00 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.782 16:19:00 -- accel/accel.sh@19 -- # IFS=: 00:07:26.782 16:19:00 -- accel/accel.sh@19 -- # read -r var val 00:07:26.782 16:19:00 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:26.782 16:19:00 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.782 16:19:00 -- accel/accel.sh@19 -- # IFS=: 00:07:26.782 16:19:00 -- accel/accel.sh@19 -- # read -r var val 00:07:26.782 16:19:00 -- accel/accel.sh@20 -- # val= 00:07:26.782 16:19:00 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.782 16:19:00 -- accel/accel.sh@19 -- # IFS=: 00:07:26.782 16:19:00 -- accel/accel.sh@19 -- # read -r var val 00:07:26.782 16:19:00 -- accel/accel.sh@20 -- # val=software 00:07:26.782 16:19:00 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.782 16:19:00 -- accel/accel.sh@22 -- # accel_module=software 00:07:26.782 16:19:00 -- accel/accel.sh@19 -- # IFS=: 00:07:26.782 16:19:00 -- accel/accel.sh@19 -- # read -r var val 00:07:26.783 16:19:00 -- accel/accel.sh@20 -- # val=32 00:07:26.783 16:19:00 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.783 16:19:00 -- accel/accel.sh@19 -- # IFS=: 00:07:26.783 16:19:00 -- accel/accel.sh@19 -- # read -r var val 00:07:26.783 16:19:00 -- accel/accel.sh@20 -- # val=32 
00:07:26.783 16:19:00 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.783 16:19:00 -- accel/accel.sh@19 -- # IFS=: 00:07:26.783 16:19:00 -- accel/accel.sh@19 -- # read -r var val 00:07:26.783 16:19:00 -- accel/accel.sh@20 -- # val=1 00:07:26.783 16:19:00 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.783 16:19:00 -- accel/accel.sh@19 -- # IFS=: 00:07:26.783 16:19:00 -- accel/accel.sh@19 -- # read -r var val 00:07:26.783 16:19:00 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:26.783 16:19:00 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.783 16:19:00 -- accel/accel.sh@19 -- # IFS=: 00:07:26.783 16:19:00 -- accel/accel.sh@19 -- # read -r var val 00:07:26.783 16:19:00 -- accel/accel.sh@20 -- # val=Yes 00:07:26.783 16:19:00 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.783 16:19:00 -- accel/accel.sh@19 -- # IFS=: 00:07:26.783 16:19:00 -- accel/accel.sh@19 -- # read -r var val 00:07:26.783 16:19:00 -- accel/accel.sh@20 -- # val= 00:07:26.783 16:19:00 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.783 16:19:00 -- accel/accel.sh@19 -- # IFS=: 00:07:26.783 16:19:00 -- accel/accel.sh@19 -- # read -r var val 00:07:26.783 16:19:00 -- accel/accel.sh@20 -- # val= 00:07:26.783 16:19:00 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.783 16:19:00 -- accel/accel.sh@19 -- # IFS=: 00:07:26.783 16:19:00 -- accel/accel.sh@19 -- # read -r var val 00:07:28.158 16:19:01 -- accel/accel.sh@20 -- # val= 00:07:28.158 16:19:01 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.158 16:19:01 -- accel/accel.sh@19 -- # IFS=: 00:07:28.158 16:19:01 -- accel/accel.sh@19 -- # read -r var val 00:07:28.159 16:19:01 -- accel/accel.sh@20 -- # val= 00:07:28.159 16:19:01 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.159 16:19:01 -- accel/accel.sh@19 -- # IFS=: 00:07:28.159 16:19:01 -- accel/accel.sh@19 -- # read -r var val 00:07:28.159 16:19:01 -- accel/accel.sh@20 -- # val= 00:07:28.159 16:19:01 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.159 16:19:01 -- accel/accel.sh@19 -- # IFS=: 00:07:28.159 16:19:01 -- accel/accel.sh@19 -- # read -r var val 00:07:28.159 16:19:01 -- accel/accel.sh@20 -- # val= 00:07:28.159 16:19:01 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.159 16:19:01 -- accel/accel.sh@19 -- # IFS=: 00:07:28.159 16:19:01 -- accel/accel.sh@19 -- # read -r var val 00:07:28.159 16:19:01 -- accel/accel.sh@20 -- # val= 00:07:28.159 16:19:01 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.159 16:19:01 -- accel/accel.sh@19 -- # IFS=: 00:07:28.159 16:19:01 -- accel/accel.sh@19 -- # read -r var val 00:07:28.159 16:19:01 -- accel/accel.sh@20 -- # val= 00:07:28.159 16:19:01 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.159 16:19:01 -- accel/accel.sh@19 -- # IFS=: 00:07:28.159 16:19:01 -- accel/accel.sh@19 -- # read -r var val 00:07:28.159 16:19:01 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:28.159 16:19:01 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:28.159 16:19:01 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:28.159 00:07:28.159 real 0m1.553s 00:07:28.159 user 0m1.333s 00:07:28.159 sys 0m0.122s 00:07:28.159 16:19:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:28.159 ************************************ 00:07:28.159 END TEST accel_copy_crc32c 00:07:28.159 ************************************ 00:07:28.159 16:19:01 -- common/autotest_common.sh@10 -- # set +x 00:07:28.159 16:19:02 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:28.159 16:19:02 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 
']' 00:07:28.159 16:19:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:28.159 16:19:02 -- common/autotest_common.sh@10 -- # set +x 00:07:28.159 ************************************ 00:07:28.159 START TEST accel_copy_crc32c_C2 00:07:28.159 ************************************ 00:07:28.159 16:19:02 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:28.159 16:19:02 -- accel/accel.sh@16 -- # local accel_opc 00:07:28.159 16:19:02 -- accel/accel.sh@17 -- # local accel_module 00:07:28.159 16:19:02 -- accel/accel.sh@19 -- # IFS=: 00:07:28.159 16:19:02 -- accel/accel.sh@19 -- # read -r var val 00:07:28.159 16:19:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:28.159 16:19:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:28.159 16:19:02 -- accel/accel.sh@12 -- # build_accel_config 00:07:28.159 16:19:02 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:28.159 16:19:02 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:28.159 16:19:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.159 16:19:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.159 16:19:02 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:28.159 16:19:02 -- accel/accel.sh@40 -- # local IFS=, 00:07:28.159 16:19:02 -- accel/accel.sh@41 -- # jq -r . 00:07:28.159 [2024-04-17 16:19:02.114644] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:07:28.159 [2024-04-17 16:19:02.114786] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63605 ] 00:07:28.417 [2024-04-17 16:19:02.255978] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.417 [2024-04-17 16:19:02.389084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.417 16:19:02 -- accel/accel.sh@20 -- # val= 00:07:28.417 16:19:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.417 16:19:02 -- accel/accel.sh@19 -- # IFS=: 00:07:28.417 16:19:02 -- accel/accel.sh@19 -- # read -r var val 00:07:28.417 16:19:02 -- accel/accel.sh@20 -- # val= 00:07:28.417 16:19:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.417 16:19:02 -- accel/accel.sh@19 -- # IFS=: 00:07:28.417 16:19:02 -- accel/accel.sh@19 -- # read -r var val 00:07:28.417 16:19:02 -- accel/accel.sh@20 -- # val=0x1 00:07:28.417 16:19:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.417 16:19:02 -- accel/accel.sh@19 -- # IFS=: 00:07:28.417 16:19:02 -- accel/accel.sh@19 -- # read -r var val 00:07:28.417 16:19:02 -- accel/accel.sh@20 -- # val= 00:07:28.417 16:19:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.417 16:19:02 -- accel/accel.sh@19 -- # IFS=: 00:07:28.417 16:19:02 -- accel/accel.sh@19 -- # read -r var val 00:07:28.417 16:19:02 -- accel/accel.sh@20 -- # val= 00:07:28.417 16:19:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.417 16:19:02 -- accel/accel.sh@19 -- # IFS=: 00:07:28.417 16:19:02 -- accel/accel.sh@19 -- # read -r var val 00:07:28.417 16:19:02 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:28.417 16:19:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.417 16:19:02 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:28.417 16:19:02 -- accel/accel.sh@19 -- # IFS=: 00:07:28.417 16:19:02 -- accel/accel.sh@19 -- # read -r var val 00:07:28.417 16:19:02 -- accel/accel.sh@20 -- # val=0 00:07:28.417 16:19:02 -- 
accel/accel.sh@21 -- # case "$var" in 00:07:28.417 16:19:02 -- accel/accel.sh@19 -- # IFS=: 00:07:28.417 16:19:02 -- accel/accel.sh@19 -- # read -r var val 00:07:28.417 16:19:02 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:28.417 16:19:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.417 16:19:02 -- accel/accel.sh@19 -- # IFS=: 00:07:28.417 16:19:02 -- accel/accel.sh@19 -- # read -r var val 00:07:28.417 16:19:02 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:28.417 16:19:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.417 16:19:02 -- accel/accel.sh@19 -- # IFS=: 00:07:28.417 16:19:02 -- accel/accel.sh@19 -- # read -r var val 00:07:28.417 16:19:02 -- accel/accel.sh@20 -- # val= 00:07:28.417 16:19:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.417 16:19:02 -- accel/accel.sh@19 -- # IFS=: 00:07:28.417 16:19:02 -- accel/accel.sh@19 -- # read -r var val 00:07:28.417 16:19:02 -- accel/accel.sh@20 -- # val=software 00:07:28.417 16:19:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.417 16:19:02 -- accel/accel.sh@22 -- # accel_module=software 00:07:28.417 16:19:02 -- accel/accel.sh@19 -- # IFS=: 00:07:28.417 16:19:02 -- accel/accel.sh@19 -- # read -r var val 00:07:28.417 16:19:02 -- accel/accel.sh@20 -- # val=32 00:07:28.417 16:19:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.417 16:19:02 -- accel/accel.sh@19 -- # IFS=: 00:07:28.417 16:19:02 -- accel/accel.sh@19 -- # read -r var val 00:07:28.417 16:19:02 -- accel/accel.sh@20 -- # val=32 00:07:28.417 16:19:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.417 16:19:02 -- accel/accel.sh@19 -- # IFS=: 00:07:28.417 16:19:02 -- accel/accel.sh@19 -- # read -r var val 00:07:28.417 16:19:02 -- accel/accel.sh@20 -- # val=1 00:07:28.675 16:19:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.675 16:19:02 -- accel/accel.sh@19 -- # IFS=: 00:07:28.675 16:19:02 -- accel/accel.sh@19 -- # read -r var val 00:07:28.675 16:19:02 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:28.675 16:19:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.675 16:19:02 -- accel/accel.sh@19 -- # IFS=: 00:07:28.675 16:19:02 -- accel/accel.sh@19 -- # read -r var val 00:07:28.675 16:19:02 -- accel/accel.sh@20 -- # val=Yes 00:07:28.675 16:19:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.675 16:19:02 -- accel/accel.sh@19 -- # IFS=: 00:07:28.675 16:19:02 -- accel/accel.sh@19 -- # read -r var val 00:07:28.676 16:19:02 -- accel/accel.sh@20 -- # val= 00:07:28.676 16:19:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.676 16:19:02 -- accel/accel.sh@19 -- # IFS=: 00:07:28.676 16:19:02 -- accel/accel.sh@19 -- # read -r var val 00:07:28.676 16:19:02 -- accel/accel.sh@20 -- # val= 00:07:28.676 16:19:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.676 16:19:02 -- accel/accel.sh@19 -- # IFS=: 00:07:28.676 16:19:02 -- accel/accel.sh@19 -- # read -r var val 00:07:29.609 16:19:03 -- accel/accel.sh@20 -- # val= 00:07:29.609 16:19:03 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.609 16:19:03 -- accel/accel.sh@19 -- # IFS=: 00:07:29.609 16:19:03 -- accel/accel.sh@19 -- # read -r var val 00:07:29.609 16:19:03 -- accel/accel.sh@20 -- # val= 00:07:29.609 16:19:03 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.609 16:19:03 -- accel/accel.sh@19 -- # IFS=: 00:07:29.609 16:19:03 -- accel/accel.sh@19 -- # read -r var val 00:07:29.609 16:19:03 -- accel/accel.sh@20 -- # val= 00:07:29.609 16:19:03 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.609 16:19:03 -- accel/accel.sh@19 -- # IFS=: 00:07:29.609 16:19:03 -- accel/accel.sh@19 -- # read -r var val 
00:07:29.609 16:19:03 -- accel/accel.sh@20 -- # val= 00:07:29.609 16:19:03 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.609 16:19:03 -- accel/accel.sh@19 -- # IFS=: 00:07:29.609 16:19:03 -- accel/accel.sh@19 -- # read -r var val 00:07:29.609 16:19:03 -- accel/accel.sh@20 -- # val= 00:07:29.609 16:19:03 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.609 16:19:03 -- accel/accel.sh@19 -- # IFS=: 00:07:29.609 16:19:03 -- accel/accel.sh@19 -- # read -r var val 00:07:29.609 16:19:03 -- accel/accel.sh@20 -- # val= 00:07:29.609 16:19:03 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.609 16:19:03 -- accel/accel.sh@19 -- # IFS=: 00:07:29.609 16:19:03 -- accel/accel.sh@19 -- # read -r var val 00:07:29.609 16:19:03 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:29.609 16:19:03 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:29.609 16:19:03 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:29.609 00:07:29.609 real 0m1.561s 00:07:29.609 user 0m1.345s 00:07:29.609 sys 0m0.117s 00:07:29.609 16:19:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:29.866 ************************************ 00:07:29.866 END TEST accel_copy_crc32c_C2 00:07:29.866 ************************************ 00:07:29.866 16:19:03 -- common/autotest_common.sh@10 -- # set +x 00:07:29.867 16:19:03 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:29.867 16:19:03 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:29.867 16:19:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:29.867 16:19:03 -- common/autotest_common.sh@10 -- # set +x 00:07:29.867 ************************************ 00:07:29.867 START TEST accel_dualcast 00:07:29.867 ************************************ 00:07:29.867 16:19:03 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:07:29.867 16:19:03 -- accel/accel.sh@16 -- # local accel_opc 00:07:29.867 16:19:03 -- accel/accel.sh@17 -- # local accel_module 00:07:29.867 16:19:03 -- accel/accel.sh@19 -- # IFS=: 00:07:29.867 16:19:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:29.867 16:19:03 -- accel/accel.sh@19 -- # read -r var val 00:07:29.867 16:19:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:29.867 16:19:03 -- accel/accel.sh@12 -- # build_accel_config 00:07:29.867 16:19:03 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:29.867 16:19:03 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:29.867 16:19:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.867 16:19:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.867 16:19:03 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:29.867 16:19:03 -- accel/accel.sh@40 -- # local IFS=, 00:07:29.867 16:19:03 -- accel/accel.sh@41 -- # jq -r . 00:07:29.867 [2024-04-17 16:19:03.781212] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 
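Each case ends with checks like [[ software == \s\o\f\t\w\a\r\e ]]. Inside [[ ]], the right-hand side of == is a glob pattern, so the harness backslash-escapes every character to force a strictly literal comparison. The same effect, standalone:

# Escaping the RHS disables globbing; an unescaped RHS is a pattern.
mod=software
[[ $mod == \s\o\f\t\w\a\r\e ]] && echo "literal match"
[[ $mod == s*e ]] && echo "glob match (s*e is a pattern)"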
00:07:29.867 [2024-04-17 16:19:03.781295] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63649 ] 00:07:30.126 [2024-04-17 16:19:03.915631] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.126 [2024-04-17 16:19:04.046542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.126 16:19:04 -- accel/accel.sh@20 -- # val= 00:07:30.126 16:19:04 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.126 16:19:04 -- accel/accel.sh@19 -- # IFS=: 00:07:30.126 16:19:04 -- accel/accel.sh@19 -- # read -r var val 00:07:30.126 16:19:04 -- accel/accel.sh@20 -- # val= 00:07:30.126 16:19:04 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.126 16:19:04 -- accel/accel.sh@19 -- # IFS=: 00:07:30.126 16:19:04 -- accel/accel.sh@19 -- # read -r var val 00:07:30.126 16:19:04 -- accel/accel.sh@20 -- # val=0x1 00:07:30.126 16:19:04 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.126 16:19:04 -- accel/accel.sh@19 -- # IFS=: 00:07:30.126 16:19:04 -- accel/accel.sh@19 -- # read -r var val 00:07:30.126 16:19:04 -- accel/accel.sh@20 -- # val= 00:07:30.126 16:19:04 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.126 16:19:04 -- accel/accel.sh@19 -- # IFS=: 00:07:30.126 16:19:04 -- accel/accel.sh@19 -- # read -r var val 00:07:30.126 16:19:04 -- accel/accel.sh@20 -- # val= 00:07:30.126 16:19:04 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.126 16:19:04 -- accel/accel.sh@19 -- # IFS=: 00:07:30.126 16:19:04 -- accel/accel.sh@19 -- # read -r var val 00:07:30.126 16:19:04 -- accel/accel.sh@20 -- # val=dualcast 00:07:30.126 16:19:04 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.126 16:19:04 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:30.126 16:19:04 -- accel/accel.sh@19 -- # IFS=: 00:07:30.126 16:19:04 -- accel/accel.sh@19 -- # read -r var val 00:07:30.127 16:19:04 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:30.127 16:19:04 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.127 16:19:04 -- accel/accel.sh@19 -- # IFS=: 00:07:30.127 16:19:04 -- accel/accel.sh@19 -- # read -r var val 00:07:30.127 16:19:04 -- accel/accel.sh@20 -- # val= 00:07:30.127 16:19:04 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.127 16:19:04 -- accel/accel.sh@19 -- # IFS=: 00:07:30.127 16:19:04 -- accel/accel.sh@19 -- # read -r var val 00:07:30.127 16:19:04 -- accel/accel.sh@20 -- # val=software 00:07:30.127 16:19:04 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.127 16:19:04 -- accel/accel.sh@22 -- # accel_module=software 00:07:30.127 16:19:04 -- accel/accel.sh@19 -- # IFS=: 00:07:30.127 16:19:04 -- accel/accel.sh@19 -- # read -r var val 00:07:30.127 16:19:04 -- accel/accel.sh@20 -- # val=32 00:07:30.127 16:19:04 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.127 16:19:04 -- accel/accel.sh@19 -- # IFS=: 00:07:30.127 16:19:04 -- accel/accel.sh@19 -- # read -r var val 00:07:30.127 16:19:04 -- accel/accel.sh@20 -- # val=32 00:07:30.127 16:19:04 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.127 16:19:04 -- accel/accel.sh@19 -- # IFS=: 00:07:30.127 16:19:04 -- accel/accel.sh@19 -- # read -r var val 00:07:30.127 16:19:04 -- accel/accel.sh@20 -- # val=1 00:07:30.127 16:19:04 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.127 16:19:04 -- accel/accel.sh@19 -- # IFS=: 00:07:30.127 16:19:04 -- accel/accel.sh@19 -- # read -r var val 00:07:30.127 16:19:04 -- accel/accel.sh@20 -- # val='1 seconds' 
00:07:30.127 16:19:04 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.127 16:19:04 -- accel/accel.sh@19 -- # IFS=: 00:07:30.127 16:19:04 -- accel/accel.sh@19 -- # read -r var val 00:07:30.127 16:19:04 -- accel/accel.sh@20 -- # val=Yes 00:07:30.127 16:19:04 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.127 16:19:04 -- accel/accel.sh@19 -- # IFS=: 00:07:30.127 16:19:04 -- accel/accel.sh@19 -- # read -r var val 00:07:30.127 16:19:04 -- accel/accel.sh@20 -- # val= 00:07:30.127 16:19:04 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.127 16:19:04 -- accel/accel.sh@19 -- # IFS=: 00:07:30.127 16:19:04 -- accel/accel.sh@19 -- # read -r var val 00:07:30.127 16:19:04 -- accel/accel.sh@20 -- # val= 00:07:30.127 16:19:04 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.127 16:19:04 -- accel/accel.sh@19 -- # IFS=: 00:07:30.127 16:19:04 -- accel/accel.sh@19 -- # read -r var val 00:07:31.501 16:19:05 -- accel/accel.sh@20 -- # val= 00:07:31.501 16:19:05 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.501 16:19:05 -- accel/accel.sh@19 -- # IFS=: 00:07:31.501 16:19:05 -- accel/accel.sh@19 -- # read -r var val 00:07:31.501 16:19:05 -- accel/accel.sh@20 -- # val= 00:07:31.501 16:19:05 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.501 16:19:05 -- accel/accel.sh@19 -- # IFS=: 00:07:31.501 16:19:05 -- accel/accel.sh@19 -- # read -r var val 00:07:31.501 16:19:05 -- accel/accel.sh@20 -- # val= 00:07:31.501 16:19:05 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.501 16:19:05 -- accel/accel.sh@19 -- # IFS=: 00:07:31.501 16:19:05 -- accel/accel.sh@19 -- # read -r var val 00:07:31.501 16:19:05 -- accel/accel.sh@20 -- # val= 00:07:31.501 16:19:05 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.501 16:19:05 -- accel/accel.sh@19 -- # IFS=: 00:07:31.501 16:19:05 -- accel/accel.sh@19 -- # read -r var val 00:07:31.501 16:19:05 -- accel/accel.sh@20 -- # val= 00:07:31.501 16:19:05 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.501 16:19:05 -- accel/accel.sh@19 -- # IFS=: 00:07:31.501 16:19:05 -- accel/accel.sh@19 -- # read -r var val 00:07:31.501 16:19:05 -- accel/accel.sh@20 -- # val= 00:07:31.501 16:19:05 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.501 16:19:05 -- accel/accel.sh@19 -- # IFS=: 00:07:31.501 16:19:05 -- accel/accel.sh@19 -- # read -r var val 00:07:31.501 16:19:05 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:31.501 16:19:05 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:31.501 16:19:05 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:31.501 00:07:31.501 real 0m1.545s 00:07:31.501 user 0m1.334s 00:07:31.501 sys 0m0.111s 00:07:31.501 16:19:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:31.501 16:19:05 -- common/autotest_common.sh@10 -- # set +x 00:07:31.501 ************************************ 00:07:31.501 END TEST accel_dualcast 00:07:31.501 ************************************ 00:07:31.501 16:19:05 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:31.501 16:19:05 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:31.501 16:19:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:31.501 16:19:05 -- common/autotest_common.sh@10 -- # set +x 00:07:31.501 ************************************ 00:07:31.501 START TEST accel_compare 00:07:31.501 ************************************ 00:07:31.501 16:19:05 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y 00:07:31.501 16:19:05 -- accel/accel.sh@16 -- # local accel_opc 00:07:31.501 16:19:05 -- accel/accel.sh@17 -- # local 
accel_module 00:07:31.501 16:19:05 -- accel/accel.sh@19 -- # IFS=: 00:07:31.501 16:19:05 -- accel/accel.sh@19 -- # read -r var val 00:07:31.501 16:19:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:31.501 16:19:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:31.501 16:19:05 -- accel/accel.sh@12 -- # build_accel_config 00:07:31.501 16:19:05 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:31.501 16:19:05 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:31.501 16:19:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.501 16:19:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.501 16:19:05 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:31.501 16:19:05 -- accel/accel.sh@40 -- # local IFS=, 00:07:31.501 16:19:05 -- accel/accel.sh@41 -- # jq -r . 00:07:31.501 [2024-04-17 16:19:05.453591] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:07:31.501 [2024-04-17 16:19:05.453677] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63687 ] 00:07:31.759 [2024-04-17 16:19:05.589860] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.759 [2024-04-17 16:19:05.723230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.759 16:19:05 -- accel/accel.sh@20 -- # val= 00:07:31.759 16:19:05 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.759 16:19:05 -- accel/accel.sh@19 -- # IFS=: 00:07:31.759 16:19:05 -- accel/accel.sh@19 -- # read -r var val 00:07:31.759 16:19:05 -- accel/accel.sh@20 -- # val= 00:07:31.759 16:19:05 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.760 16:19:05 -- accel/accel.sh@19 -- # IFS=: 00:07:31.760 16:19:05 -- accel/accel.sh@19 -- # read -r var val 00:07:31.760 16:19:05 -- accel/accel.sh@20 -- # val=0x1 00:07:31.760 16:19:05 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.760 16:19:05 -- accel/accel.sh@19 -- # IFS=: 00:07:31.760 16:19:05 -- accel/accel.sh@19 -- # read -r var val 00:07:31.760 16:19:05 -- accel/accel.sh@20 -- # val= 00:07:31.760 16:19:05 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.760 16:19:05 -- accel/accel.sh@19 -- # IFS=: 00:07:31.760 16:19:05 -- accel/accel.sh@19 -- # read -r var val 00:07:31.760 16:19:05 -- accel/accel.sh@20 -- # val= 00:07:31.760 16:19:05 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.760 16:19:05 -- accel/accel.sh@19 -- # IFS=: 00:07:31.760 16:19:05 -- accel/accel.sh@19 -- # read -r var val 00:07:31.760 16:19:05 -- accel/accel.sh@20 -- # val=compare 00:07:31.760 16:19:05 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.760 16:19:05 -- accel/accel.sh@23 -- # accel_opc=compare 00:07:31.760 16:19:05 -- accel/accel.sh@19 -- # IFS=: 00:07:31.760 16:19:05 -- accel/accel.sh@19 -- # read -r var val 00:07:31.760 16:19:05 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:31.760 16:19:05 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.760 16:19:05 -- accel/accel.sh@19 -- # IFS=: 00:07:31.760 16:19:05 -- accel/accel.sh@19 -- # read -r var val 00:07:31.760 16:19:05 -- accel/accel.sh@20 -- # val= 00:07:31.760 16:19:05 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.760 16:19:05 -- accel/accel.sh@19 -- # IFS=: 00:07:31.760 16:19:05 -- accel/accel.sh@19 -- # read -r var val 00:07:31.760 16:19:05 -- accel/accel.sh@20 -- # val=software 00:07:31.760 16:19:05 -- accel/accel.sh@21 -- # case "$var" in 
00:07:31.760 16:19:05 -- accel/accel.sh@22 -- # accel_module=software 00:07:31.760 16:19:05 -- accel/accel.sh@19 -- # IFS=: 00:07:31.760 16:19:05 -- accel/accel.sh@19 -- # read -r var val 00:07:31.760 16:19:05 -- accel/accel.sh@20 -- # val=32 00:07:31.760 16:19:05 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.760 16:19:05 -- accel/accel.sh@19 -- # IFS=: 00:07:31.760 16:19:05 -- accel/accel.sh@19 -- # read -r var val 00:07:31.760 16:19:05 -- accel/accel.sh@20 -- # val=32 00:07:31.760 16:19:05 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.760 16:19:05 -- accel/accel.sh@19 -- # IFS=: 00:07:31.760 16:19:05 -- accel/accel.sh@19 -- # read -r var val 00:07:31.760 16:19:05 -- accel/accel.sh@20 -- # val=1 00:07:31.760 16:19:05 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.760 16:19:05 -- accel/accel.sh@19 -- # IFS=: 00:07:31.760 16:19:05 -- accel/accel.sh@19 -- # read -r var val 00:07:31.760 16:19:05 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:31.760 16:19:05 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.760 16:19:05 -- accel/accel.sh@19 -- # IFS=: 00:07:31.760 16:19:05 -- accel/accel.sh@19 -- # read -r var val 00:07:31.760 16:19:05 -- accel/accel.sh@20 -- # val=Yes 00:07:31.760 16:19:05 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.760 16:19:05 -- accel/accel.sh@19 -- # IFS=: 00:07:31.760 16:19:05 -- accel/accel.sh@19 -- # read -r var val 00:07:31.760 16:19:05 -- accel/accel.sh@20 -- # val= 00:07:31.760 16:19:05 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.760 16:19:05 -- accel/accel.sh@19 -- # IFS=: 00:07:31.760 16:19:05 -- accel/accel.sh@19 -- # read -r var val 00:07:31.760 16:19:05 -- accel/accel.sh@20 -- # val= 00:07:31.760 16:19:05 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.760 16:19:05 -- accel/accel.sh@19 -- # IFS=: 00:07:31.760 16:19:05 -- accel/accel.sh@19 -- # read -r var val 00:07:33.134 16:19:06 -- accel/accel.sh@20 -- # val= 00:07:33.134 16:19:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.134 16:19:06 -- accel/accel.sh@19 -- # IFS=: 00:07:33.134 16:19:06 -- accel/accel.sh@19 -- # read -r var val 00:07:33.134 16:19:06 -- accel/accel.sh@20 -- # val= 00:07:33.134 16:19:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.134 16:19:06 -- accel/accel.sh@19 -- # IFS=: 00:07:33.134 16:19:06 -- accel/accel.sh@19 -- # read -r var val 00:07:33.134 16:19:06 -- accel/accel.sh@20 -- # val= 00:07:33.134 16:19:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.134 16:19:06 -- accel/accel.sh@19 -- # IFS=: 00:07:33.134 16:19:06 -- accel/accel.sh@19 -- # read -r var val 00:07:33.134 16:19:06 -- accel/accel.sh@20 -- # val= 00:07:33.134 16:19:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.134 16:19:06 -- accel/accel.sh@19 -- # IFS=: 00:07:33.134 16:19:06 -- accel/accel.sh@19 -- # read -r var val 00:07:33.134 16:19:06 -- accel/accel.sh@20 -- # val= 00:07:33.134 16:19:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.134 16:19:06 -- accel/accel.sh@19 -- # IFS=: 00:07:33.134 16:19:06 -- accel/accel.sh@19 -- # read -r var val 00:07:33.134 16:19:06 -- accel/accel.sh@20 -- # val= 00:07:33.134 16:19:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.134 16:19:06 -- accel/accel.sh@19 -- # IFS=: 00:07:33.134 16:19:06 -- accel/accel.sh@19 -- # read -r var val 00:07:33.134 16:19:06 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:33.134 16:19:06 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:33.134 16:19:06 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:33.134 00:07:33.134 real 0m1.558s 00:07:33.134 user 0m1.339s 00:07:33.134 sys 
0m0.120s 00:07:33.134 ************************************ 00:07:33.135 END TEST accel_compare 00:07:33.135 ************************************ 00:07:33.135 16:19:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:33.135 16:19:06 -- common/autotest_common.sh@10 -- # set +x 00:07:33.135 16:19:07 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:33.135 16:19:07 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:33.135 16:19:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:33.135 16:19:07 -- common/autotest_common.sh@10 -- # set +x 00:07:33.135 ************************************ 00:07:33.135 START TEST accel_xor 00:07:33.135 ************************************ 00:07:33.135 16:19:07 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y 00:07:33.135 16:19:07 -- accel/accel.sh@16 -- # local accel_opc 00:07:33.135 16:19:07 -- accel/accel.sh@17 -- # local accel_module 00:07:33.135 16:19:07 -- accel/accel.sh@19 -- # IFS=: 00:07:33.135 16:19:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:33.135 16:19:07 -- accel/accel.sh@19 -- # read -r var val 00:07:33.135 16:19:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:33.135 16:19:07 -- accel/accel.sh@12 -- # build_accel_config 00:07:33.135 16:19:07 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:33.135 16:19:07 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:33.135 16:19:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.135 16:19:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.135 16:19:07 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:33.135 16:19:07 -- accel/accel.sh@40 -- # local IFS=, 00:07:33.135 16:19:07 -- accel/accel.sh@41 -- # jq -r . 00:07:33.135 [2024-04-17 16:19:07.128053] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 
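(The accel_dualcast and accel_compare passes above each drive accel_perf's software path for one second over 4096-byte buffers, then assert that the recorded module and opcode match. A minimal C sketch of what those two opcodes compute; illustrative only, with invented function names, and not SPDK's implementation:

    #include <string.h>
    #include <stdint.h>
    #include <stddef.h>

    /* Illustrative only: a software "dualcast" writes one source buffer to
     * two destinations; "compare" checks two buffers for equality. Sizes
     * match the 4096-byte buffers configured above. */
    static void sw_dualcast(uint8_t *dst1, uint8_t *dst2,
                            const uint8_t *src, size_t len)
    {
            memcpy(dst1, src, len);   /* first copy */
            memcpy(dst2, src, len);   /* second copy */
    }

    static int sw_compare(const uint8_t *a, const uint8_t *b, size_t len)
    {
            return memcmp(a, b, len); /* 0 on match, which is what -y verifies */
    }
)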
00:07:33.135 [2024-04-17 16:19:07.128156] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63727 ] 00:07:33.393 [2024-04-17 16:19:07.261974] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.393 [2024-04-17 16:19:07.393045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.661 16:19:07 -- accel/accel.sh@20 -- # val= 00:07:33.661 16:19:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.661 16:19:07 -- accel/accel.sh@19 -- # IFS=: 00:07:33.661 16:19:07 -- accel/accel.sh@19 -- # read -r var val 00:07:33.661 16:19:07 -- accel/accel.sh@20 -- # val= 00:07:33.661 16:19:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.661 16:19:07 -- accel/accel.sh@19 -- # IFS=: 00:07:33.661 16:19:07 -- accel/accel.sh@19 -- # read -r var val 00:07:33.661 16:19:07 -- accel/accel.sh@20 -- # val=0x1 00:07:33.661 16:19:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.661 16:19:07 -- accel/accel.sh@19 -- # IFS=: 00:07:33.661 16:19:07 -- accel/accel.sh@19 -- # read -r var val 00:07:33.661 16:19:07 -- accel/accel.sh@20 -- # val= 00:07:33.661 16:19:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.661 16:19:07 -- accel/accel.sh@19 -- # IFS=: 00:07:33.661 16:19:07 -- accel/accel.sh@19 -- # read -r var val 00:07:33.661 16:19:07 -- accel/accel.sh@20 -- # val= 00:07:33.661 16:19:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.661 16:19:07 -- accel/accel.sh@19 -- # IFS=: 00:07:33.661 16:19:07 -- accel/accel.sh@19 -- # read -r var val 00:07:33.661 16:19:07 -- accel/accel.sh@20 -- # val=xor 00:07:33.661 16:19:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.661 16:19:07 -- accel/accel.sh@23 -- # accel_opc=xor 00:07:33.661 16:19:07 -- accel/accel.sh@19 -- # IFS=: 00:07:33.661 16:19:07 -- accel/accel.sh@19 -- # read -r var val 00:07:33.661 16:19:07 -- accel/accel.sh@20 -- # val=2 00:07:33.661 16:19:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.661 16:19:07 -- accel/accel.sh@19 -- # IFS=: 00:07:33.661 16:19:07 -- accel/accel.sh@19 -- # read -r var val 00:07:33.661 16:19:07 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:33.661 16:19:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.661 16:19:07 -- accel/accel.sh@19 -- # IFS=: 00:07:33.661 16:19:07 -- accel/accel.sh@19 -- # read -r var val 00:07:33.661 16:19:07 -- accel/accel.sh@20 -- # val= 00:07:33.661 16:19:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.661 16:19:07 -- accel/accel.sh@19 -- # IFS=: 00:07:33.661 16:19:07 -- accel/accel.sh@19 -- # read -r var val 00:07:33.661 16:19:07 -- accel/accel.sh@20 -- # val=software 00:07:33.661 16:19:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.661 16:19:07 -- accel/accel.sh@22 -- # accel_module=software 00:07:33.661 16:19:07 -- accel/accel.sh@19 -- # IFS=: 00:07:33.661 16:19:07 -- accel/accel.sh@19 -- # read -r var val 00:07:33.661 16:19:07 -- accel/accel.sh@20 -- # val=32 00:07:33.661 16:19:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.661 16:19:07 -- accel/accel.sh@19 -- # IFS=: 00:07:33.661 16:19:07 -- accel/accel.sh@19 -- # read -r var val 00:07:33.661 16:19:07 -- accel/accel.sh@20 -- # val=32 00:07:33.661 16:19:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.661 16:19:07 -- accel/accel.sh@19 -- # IFS=: 00:07:33.661 16:19:07 -- accel/accel.sh@19 -- # read -r var val 00:07:33.661 16:19:07 -- accel/accel.sh@20 -- # val=1 00:07:33.661 16:19:07 -- 
accel/accel.sh@21 -- # case "$var" in 00:07:33.661 16:19:07 -- accel/accel.sh@19 -- # IFS=: 00:07:33.661 16:19:07 -- accel/accel.sh@19 -- # read -r var val 00:07:33.661 16:19:07 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:33.661 16:19:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.661 16:19:07 -- accel/accel.sh@19 -- # IFS=: 00:07:33.661 16:19:07 -- accel/accel.sh@19 -- # read -r var val 00:07:33.661 16:19:07 -- accel/accel.sh@20 -- # val=Yes 00:07:33.661 16:19:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.661 16:19:07 -- accel/accel.sh@19 -- # IFS=: 00:07:33.661 16:19:07 -- accel/accel.sh@19 -- # read -r var val 00:07:33.661 16:19:07 -- accel/accel.sh@20 -- # val= 00:07:33.661 16:19:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.661 16:19:07 -- accel/accel.sh@19 -- # IFS=: 00:07:33.661 16:19:07 -- accel/accel.sh@19 -- # read -r var val 00:07:33.661 16:19:07 -- accel/accel.sh@20 -- # val= 00:07:33.661 16:19:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.661 16:19:07 -- accel/accel.sh@19 -- # IFS=: 00:07:33.661 16:19:07 -- accel/accel.sh@19 -- # read -r var val 00:07:35.038 16:19:08 -- accel/accel.sh@20 -- # val= 00:07:35.038 16:19:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.038 16:19:08 -- accel/accel.sh@19 -- # IFS=: 00:07:35.038 16:19:08 -- accel/accel.sh@19 -- # read -r var val 00:07:35.038 16:19:08 -- accel/accel.sh@20 -- # val= 00:07:35.038 16:19:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.038 16:19:08 -- accel/accel.sh@19 -- # IFS=: 00:07:35.038 16:19:08 -- accel/accel.sh@19 -- # read -r var val 00:07:35.038 16:19:08 -- accel/accel.sh@20 -- # val= 00:07:35.038 16:19:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.038 16:19:08 -- accel/accel.sh@19 -- # IFS=: 00:07:35.038 16:19:08 -- accel/accel.sh@19 -- # read -r var val 00:07:35.038 16:19:08 -- accel/accel.sh@20 -- # val= 00:07:35.038 16:19:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.038 16:19:08 -- accel/accel.sh@19 -- # IFS=: 00:07:35.038 16:19:08 -- accel/accel.sh@19 -- # read -r var val 00:07:35.038 16:19:08 -- accel/accel.sh@20 -- # val= 00:07:35.038 16:19:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.038 16:19:08 -- accel/accel.sh@19 -- # IFS=: 00:07:35.038 16:19:08 -- accel/accel.sh@19 -- # read -r var val 00:07:35.038 16:19:08 -- accel/accel.sh@20 -- # val= 00:07:35.038 16:19:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.038 16:19:08 -- accel/accel.sh@19 -- # IFS=: 00:07:35.038 16:19:08 -- accel/accel.sh@19 -- # read -r var val 00:07:35.038 16:19:08 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:35.038 16:19:08 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:35.038 16:19:08 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:35.038 00:07:35.038 real 0m1.663s 00:07:35.038 user 0m1.442s 00:07:35.038 sys 0m0.121s 00:07:35.038 16:19:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:35.038 ************************************ 00:07:35.038 END TEST accel_xor 00:07:35.038 ************************************ 00:07:35.038 16:19:08 -- common/autotest_common.sh@10 -- # set +x 00:07:35.038 16:19:08 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:35.038 16:19:08 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:35.038 16:19:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:35.038 16:19:08 -- common/autotest_common.sh@10 -- # set +x 00:07:35.038 ************************************ 00:07:35.038 START TEST accel_xor 00:07:35.038 ************************************ 00:07:35.038 
16:19:08 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:07:35.038 16:19:08 -- accel/accel.sh@16 -- # local accel_opc 00:07:35.038 16:19:08 -- accel/accel.sh@17 -- # local accel_module 00:07:35.038 16:19:08 -- accel/accel.sh@19 -- # IFS=: 00:07:35.038 16:19:08 -- accel/accel.sh@19 -- # read -r var val 00:07:35.038 16:19:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:35.038 16:19:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:35.038 16:19:08 -- accel/accel.sh@12 -- # build_accel_config 00:07:35.038 16:19:08 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:35.038 16:19:08 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:35.038 16:19:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.038 16:19:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.038 16:19:08 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:35.038 16:19:08 -- accel/accel.sh@40 -- # local IFS=, 00:07:35.038 16:19:08 -- accel/accel.sh@41 -- # jq -r . 00:07:35.038 [2024-04-17 16:19:08.974689] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:07:35.038 [2024-04-17 16:19:08.974929] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63771 ] 00:07:35.298 [2024-04-17 16:19:09.119156] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.298 [2024-04-17 16:19:09.288754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.556 16:19:09 -- accel/accel.sh@20 -- # val= 00:07:35.556 16:19:09 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.556 16:19:09 -- accel/accel.sh@19 -- # IFS=: 00:07:35.556 16:19:09 -- accel/accel.sh@19 -- # read -r var val 00:07:35.556 16:19:09 -- accel/accel.sh@20 -- # val= 00:07:35.556 16:19:09 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.556 16:19:09 -- accel/accel.sh@19 -- # IFS=: 00:07:35.556 16:19:09 -- accel/accel.sh@19 -- # read -r var val 00:07:35.556 16:19:09 -- accel/accel.sh@20 -- # val=0x1 00:07:35.556 16:19:09 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.556 16:19:09 -- accel/accel.sh@19 -- # IFS=: 00:07:35.556 16:19:09 -- accel/accel.sh@19 -- # read -r var val 00:07:35.556 16:19:09 -- accel/accel.sh@20 -- # val= 00:07:35.556 16:19:09 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.556 16:19:09 -- accel/accel.sh@19 -- # IFS=: 00:07:35.556 16:19:09 -- accel/accel.sh@19 -- # read -r var val 00:07:35.556 16:19:09 -- accel/accel.sh@20 -- # val= 00:07:35.556 16:19:09 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.556 16:19:09 -- accel/accel.sh@19 -- # IFS=: 00:07:35.556 16:19:09 -- accel/accel.sh@19 -- # read -r var val 00:07:35.556 16:19:09 -- accel/accel.sh@20 -- # val=xor 00:07:35.556 16:19:09 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.556 16:19:09 -- accel/accel.sh@23 -- # accel_opc=xor 00:07:35.556 16:19:09 -- accel/accel.sh@19 -- # IFS=: 00:07:35.556 16:19:09 -- accel/accel.sh@19 -- # read -r var val 00:07:35.556 16:19:09 -- accel/accel.sh@20 -- # val=3 00:07:35.556 16:19:09 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.556 16:19:09 -- accel/accel.sh@19 -- # IFS=: 00:07:35.556 16:19:09 -- accel/accel.sh@19 -- # read -r var val 00:07:35.556 16:19:09 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:35.556 16:19:09 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.556 16:19:09 -- accel/accel.sh@19 -- # IFS=: 
00:07:35.556 16:19:09 -- accel/accel.sh@19 -- # read -r var val 00:07:35.556 16:19:09 -- accel/accel.sh@20 -- # val= 00:07:35.556 16:19:09 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.556 16:19:09 -- accel/accel.sh@19 -- # IFS=: 00:07:35.556 16:19:09 -- accel/accel.sh@19 -- # read -r var val 00:07:35.556 16:19:09 -- accel/accel.sh@20 -- # val=software 00:07:35.556 16:19:09 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.556 16:19:09 -- accel/accel.sh@22 -- # accel_module=software 00:07:35.556 16:19:09 -- accel/accel.sh@19 -- # IFS=: 00:07:35.556 16:19:09 -- accel/accel.sh@19 -- # read -r var val 00:07:35.556 16:19:09 -- accel/accel.sh@20 -- # val=32 00:07:35.556 16:19:09 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.556 16:19:09 -- accel/accel.sh@19 -- # IFS=: 00:07:35.556 16:19:09 -- accel/accel.sh@19 -- # read -r var val 00:07:35.556 16:19:09 -- accel/accel.sh@20 -- # val=32 00:07:35.556 16:19:09 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.556 16:19:09 -- accel/accel.sh@19 -- # IFS=: 00:07:35.556 16:19:09 -- accel/accel.sh@19 -- # read -r var val 00:07:35.556 16:19:09 -- accel/accel.sh@20 -- # val=1 00:07:35.556 16:19:09 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.556 16:19:09 -- accel/accel.sh@19 -- # IFS=: 00:07:35.556 16:19:09 -- accel/accel.sh@19 -- # read -r var val 00:07:35.556 16:19:09 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:35.556 16:19:09 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.556 16:19:09 -- accel/accel.sh@19 -- # IFS=: 00:07:35.556 16:19:09 -- accel/accel.sh@19 -- # read -r var val 00:07:35.556 16:19:09 -- accel/accel.sh@20 -- # val=Yes 00:07:35.556 16:19:09 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.556 16:19:09 -- accel/accel.sh@19 -- # IFS=: 00:07:35.556 16:19:09 -- accel/accel.sh@19 -- # read -r var val 00:07:35.556 16:19:09 -- accel/accel.sh@20 -- # val= 00:07:35.556 16:19:09 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.556 16:19:09 -- accel/accel.sh@19 -- # IFS=: 00:07:35.556 16:19:09 -- accel/accel.sh@19 -- # read -r var val 00:07:35.556 16:19:09 -- accel/accel.sh@20 -- # val= 00:07:35.556 16:19:09 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.556 16:19:09 -- accel/accel.sh@19 -- # IFS=: 00:07:35.556 16:19:09 -- accel/accel.sh@19 -- # read -r var val 00:07:36.932 16:19:10 -- accel/accel.sh@20 -- # val= 00:07:36.932 16:19:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.932 16:19:10 -- accel/accel.sh@19 -- # IFS=: 00:07:36.932 16:19:10 -- accel/accel.sh@19 -- # read -r var val 00:07:36.932 16:19:10 -- accel/accel.sh@20 -- # val= 00:07:36.932 16:19:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.932 16:19:10 -- accel/accel.sh@19 -- # IFS=: 00:07:36.932 16:19:10 -- accel/accel.sh@19 -- # read -r var val 00:07:36.932 16:19:10 -- accel/accel.sh@20 -- # val= 00:07:36.932 16:19:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.932 16:19:10 -- accel/accel.sh@19 -- # IFS=: 00:07:36.932 16:19:10 -- accel/accel.sh@19 -- # read -r var val 00:07:36.932 16:19:10 -- accel/accel.sh@20 -- # val= 00:07:36.932 16:19:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.932 16:19:10 -- accel/accel.sh@19 -- # IFS=: 00:07:36.932 16:19:10 -- accel/accel.sh@19 -- # read -r var val 00:07:36.932 16:19:10 -- accel/accel.sh@20 -- # val= 00:07:36.932 16:19:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.932 16:19:10 -- accel/accel.sh@19 -- # IFS=: 00:07:36.932 16:19:10 -- accel/accel.sh@19 -- # read -r var val 00:07:36.932 16:19:10 -- accel/accel.sh@20 -- # val= 00:07:36.932 16:19:10 -- accel/accel.sh@21 -- # case "$var" in 
00:07:36.932 16:19:10 -- accel/accel.sh@19 -- # IFS=: 00:07:36.932 16:19:10 -- accel/accel.sh@19 -- # read -r var val 00:07:36.932 16:19:10 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:36.932 16:19:10 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:36.932 16:19:10 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:36.932 00:07:36.932 real 0m1.622s 00:07:36.932 user 0m1.372s 00:07:36.932 sys 0m0.147s 00:07:36.932 16:19:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:36.932 16:19:10 -- common/autotest_common.sh@10 -- # set +x 00:07:36.932 ************************************ 00:07:36.932 END TEST accel_xor 00:07:36.932 ************************************ 00:07:36.932 16:19:10 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:36.932 16:19:10 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:36.932 16:19:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:36.932 16:19:10 -- common/autotest_common.sh@10 -- # set +x 00:07:36.932 ************************************ 00:07:36.932 START TEST accel_dif_verify 00:07:36.932 ************************************ 00:07:36.932 16:19:10 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:07:36.932 16:19:10 -- accel/accel.sh@16 -- # local accel_opc 00:07:36.932 16:19:10 -- accel/accel.sh@17 -- # local accel_module 00:07:36.932 16:19:10 -- accel/accel.sh@19 -- # IFS=: 00:07:36.932 16:19:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:36.932 16:19:10 -- accel/accel.sh@19 -- # read -r var val 00:07:36.932 16:19:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:36.932 16:19:10 -- accel/accel.sh@12 -- # build_accel_config 00:07:36.932 16:19:10 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:36.932 16:19:10 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:36.932 16:19:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.932 16:19:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.932 16:19:10 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:36.932 16:19:10 -- accel/accel.sh@40 -- # local IFS=, 00:07:36.932 16:19:10 -- accel/accel.sh@41 -- # jq -r . 00:07:36.932 [2024-04-17 16:19:10.703730] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 
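(The two accel_xor passes above differ only in fan-in: the first uses the default two source buffers, visible as val=2 in the trace, while the second passes -x 3 and records val=3. A minimal C sketch of the byte-wise N-source xor being verified; the helper name is invented and this is not SPDK's code:

    #include <stdint.h>
    #include <stddef.h>

    /* Illustrative only: xor nsrc source buffers byte-wise into dst.
     * The first pass above ran with nsrc = 2, the second with nsrc = 3. */
    static void sw_xor(uint8_t *dst, uint8_t *const *srcs,
                       unsigned nsrc, size_t len)
    {
            for (size_t i = 0; i < len; i++) {
                    uint8_t b = srcs[0][i];
                    for (unsigned s = 1; s < nsrc; s++)
                            b ^= srcs[s][i];
                    dst[i] = b;
            }
    }
)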
00:07:36.933 [2024-04-17 16:19:10.703832] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63810 ] 00:07:36.933 [2024-04-17 16:19:10.840507] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.191 [2024-04-17 16:19:10.976261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.191 16:19:11 -- accel/accel.sh@20 -- # val= 00:07:37.191 16:19:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.191 16:19:11 -- accel/accel.sh@19 -- # IFS=: 00:07:37.191 16:19:11 -- accel/accel.sh@19 -- # read -r var val 00:07:37.191 16:19:11 -- accel/accel.sh@20 -- # val= 00:07:37.191 16:19:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.191 16:19:11 -- accel/accel.sh@19 -- # IFS=: 00:07:37.191 16:19:11 -- accel/accel.sh@19 -- # read -r var val 00:07:37.191 16:19:11 -- accel/accel.sh@20 -- # val=0x1 00:07:37.191 16:19:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.191 16:19:11 -- accel/accel.sh@19 -- # IFS=: 00:07:37.191 16:19:11 -- accel/accel.sh@19 -- # read -r var val 00:07:37.191 16:19:11 -- accel/accel.sh@20 -- # val= 00:07:37.191 16:19:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.191 16:19:11 -- accel/accel.sh@19 -- # IFS=: 00:07:37.191 16:19:11 -- accel/accel.sh@19 -- # read -r var val 00:07:37.191 16:19:11 -- accel/accel.sh@20 -- # val= 00:07:37.191 16:19:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.191 16:19:11 -- accel/accel.sh@19 -- # IFS=: 00:07:37.191 16:19:11 -- accel/accel.sh@19 -- # read -r var val 00:07:37.191 16:19:11 -- accel/accel.sh@20 -- # val=dif_verify 00:07:37.191 16:19:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.191 16:19:11 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:37.191 16:19:11 -- accel/accel.sh@19 -- # IFS=: 00:07:37.191 16:19:11 -- accel/accel.sh@19 -- # read -r var val 00:07:37.191 16:19:11 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:37.191 16:19:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.191 16:19:11 -- accel/accel.sh@19 -- # IFS=: 00:07:37.191 16:19:11 -- accel/accel.sh@19 -- # read -r var val 00:07:37.191 16:19:11 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:37.191 16:19:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.191 16:19:11 -- accel/accel.sh@19 -- # IFS=: 00:07:37.191 16:19:11 -- accel/accel.sh@19 -- # read -r var val 00:07:37.191 16:19:11 -- accel/accel.sh@20 -- # val='512 bytes' 00:07:37.191 16:19:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.191 16:19:11 -- accel/accel.sh@19 -- # IFS=: 00:07:37.191 16:19:11 -- accel/accel.sh@19 -- # read -r var val 00:07:37.191 16:19:11 -- accel/accel.sh@20 -- # val='8 bytes' 00:07:37.191 16:19:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.191 16:19:11 -- accel/accel.sh@19 -- # IFS=: 00:07:37.191 16:19:11 -- accel/accel.sh@19 -- # read -r var val 00:07:37.191 16:19:11 -- accel/accel.sh@20 -- # val= 00:07:37.191 16:19:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.191 16:19:11 -- accel/accel.sh@19 -- # IFS=: 00:07:37.191 16:19:11 -- accel/accel.sh@19 -- # read -r var val 00:07:37.191 16:19:11 -- accel/accel.sh@20 -- # val=software 00:07:37.191 16:19:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.191 16:19:11 -- accel/accel.sh@22 -- # accel_module=software 00:07:37.191 16:19:11 -- accel/accel.sh@19 -- # IFS=: 00:07:37.191 16:19:11 -- accel/accel.sh@19 -- # read -r var val 00:07:37.191 16:19:11 -- accel/accel.sh@20 
-- # val=32 00:07:37.191 16:19:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.191 16:19:11 -- accel/accel.sh@19 -- # IFS=: 00:07:37.191 16:19:11 -- accel/accel.sh@19 -- # read -r var val 00:07:37.191 16:19:11 -- accel/accel.sh@20 -- # val=32 00:07:37.191 16:19:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.191 16:19:11 -- accel/accel.sh@19 -- # IFS=: 00:07:37.191 16:19:11 -- accel/accel.sh@19 -- # read -r var val 00:07:37.191 16:19:11 -- accel/accel.sh@20 -- # val=1 00:07:37.191 16:19:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.191 16:19:11 -- accel/accel.sh@19 -- # IFS=: 00:07:37.191 16:19:11 -- accel/accel.sh@19 -- # read -r var val 00:07:37.191 16:19:11 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:37.191 16:19:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.191 16:19:11 -- accel/accel.sh@19 -- # IFS=: 00:07:37.191 16:19:11 -- accel/accel.sh@19 -- # read -r var val 00:07:37.191 16:19:11 -- accel/accel.sh@20 -- # val=No 00:07:37.191 16:19:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.191 16:19:11 -- accel/accel.sh@19 -- # IFS=: 00:07:37.192 16:19:11 -- accel/accel.sh@19 -- # read -r var val 00:07:37.192 16:19:11 -- accel/accel.sh@20 -- # val= 00:07:37.192 16:19:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.192 16:19:11 -- accel/accel.sh@19 -- # IFS=: 00:07:37.192 16:19:11 -- accel/accel.sh@19 -- # read -r var val 00:07:37.192 16:19:11 -- accel/accel.sh@20 -- # val= 00:07:37.192 16:19:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.192 16:19:11 -- accel/accel.sh@19 -- # IFS=: 00:07:37.192 16:19:11 -- accel/accel.sh@19 -- # read -r var val 00:07:38.591 16:19:12 -- accel/accel.sh@20 -- # val= 00:07:38.591 16:19:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.591 16:19:12 -- accel/accel.sh@19 -- # IFS=: 00:07:38.591 16:19:12 -- accel/accel.sh@19 -- # read -r var val 00:07:38.591 16:19:12 -- accel/accel.sh@20 -- # val= 00:07:38.591 16:19:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.591 16:19:12 -- accel/accel.sh@19 -- # IFS=: 00:07:38.591 16:19:12 -- accel/accel.sh@19 -- # read -r var val 00:07:38.591 16:19:12 -- accel/accel.sh@20 -- # val= 00:07:38.591 16:19:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.591 16:19:12 -- accel/accel.sh@19 -- # IFS=: 00:07:38.591 16:19:12 -- accel/accel.sh@19 -- # read -r var val 00:07:38.591 16:19:12 -- accel/accel.sh@20 -- # val= 00:07:38.591 16:19:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.591 16:19:12 -- accel/accel.sh@19 -- # IFS=: 00:07:38.591 16:19:12 -- accel/accel.sh@19 -- # read -r var val 00:07:38.591 16:19:12 -- accel/accel.sh@20 -- # val= 00:07:38.591 16:19:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.591 16:19:12 -- accel/accel.sh@19 -- # IFS=: 00:07:38.591 16:19:12 -- accel/accel.sh@19 -- # read -r var val 00:07:38.591 16:19:12 -- accel/accel.sh@20 -- # val= 00:07:38.591 16:19:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.591 16:19:12 -- accel/accel.sh@19 -- # IFS=: 00:07:38.591 16:19:12 -- accel/accel.sh@19 -- # read -r var val 00:07:38.591 16:19:12 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:38.591 16:19:12 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:38.591 16:19:12 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:38.591 00:07:38.591 real 0m1.555s 00:07:38.591 user 0m1.334s 00:07:38.591 sys 0m0.125s 00:07:38.591 ************************************ 00:07:38.591 END TEST accel_dif_verify 00:07:38.591 ************************************ 00:07:38.591 16:19:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:38.591 
16:19:12 -- common/autotest_common.sh@10 -- # set +x 00:07:38.591 16:19:12 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:38.591 16:19:12 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:38.591 16:19:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:38.591 16:19:12 -- common/autotest_common.sh@10 -- # set +x 00:07:38.591 ************************************ 00:07:38.591 START TEST accel_dif_generate 00:07:38.591 ************************************ 00:07:38.591 16:19:12 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:07:38.591 16:19:12 -- accel/accel.sh@16 -- # local accel_opc 00:07:38.591 16:19:12 -- accel/accel.sh@17 -- # local accel_module 00:07:38.591 16:19:12 -- accel/accel.sh@19 -- # IFS=: 00:07:38.591 16:19:12 -- accel/accel.sh@19 -- # read -r var val 00:07:38.591 16:19:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:38.591 16:19:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:38.591 16:19:12 -- accel/accel.sh@12 -- # build_accel_config 00:07:38.591 16:19:12 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:38.591 16:19:12 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:38.591 16:19:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.591 16:19:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.591 16:19:12 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:38.591 16:19:12 -- accel/accel.sh@40 -- # local IFS=, 00:07:38.591 16:19:12 -- accel/accel.sh@41 -- # jq -r . 00:07:38.591 [2024-04-17 16:19:12.376664] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:07:38.591 [2024-04-17 16:19:12.376752] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63848 ] 00:07:38.591 [2024-04-17 16:19:12.512105] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.850 [2024-04-17 16:19:12.646434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.850 16:19:12 -- accel/accel.sh@20 -- # val= 00:07:38.850 16:19:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.850 16:19:12 -- accel/accel.sh@19 -- # IFS=: 00:07:38.850 16:19:12 -- accel/accel.sh@19 -- # read -r var val 00:07:38.850 16:19:12 -- accel/accel.sh@20 -- # val= 00:07:38.850 16:19:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.850 16:19:12 -- accel/accel.sh@19 -- # IFS=: 00:07:38.850 16:19:12 -- accel/accel.sh@19 -- # read -r var val 00:07:38.850 16:19:12 -- accel/accel.sh@20 -- # val=0x1 00:07:38.850 16:19:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.850 16:19:12 -- accel/accel.sh@19 -- # IFS=: 00:07:38.850 16:19:12 -- accel/accel.sh@19 -- # read -r var val 00:07:38.850 16:19:12 -- accel/accel.sh@20 -- # val= 00:07:38.850 16:19:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.850 16:19:12 -- accel/accel.sh@19 -- # IFS=: 00:07:38.850 16:19:12 -- accel/accel.sh@19 -- # read -r var val 00:07:38.850 16:19:12 -- accel/accel.sh@20 -- # val= 00:07:38.850 16:19:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.850 16:19:12 -- accel/accel.sh@19 -- # IFS=: 00:07:38.850 16:19:12 -- accel/accel.sh@19 -- # read -r var val 00:07:38.850 16:19:12 -- accel/accel.sh@20 -- # val=dif_generate 00:07:38.850 16:19:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.850 16:19:12 -- accel/accel.sh@23 -- # 
accel_opc=dif_generate 00:07:38.850 16:19:12 -- accel/accel.sh@19 -- # IFS=: 00:07:38.850 16:19:12 -- accel/accel.sh@19 -- # read -r var val 00:07:38.850 16:19:12 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:38.850 16:19:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.850 16:19:12 -- accel/accel.sh@19 -- # IFS=: 00:07:38.850 16:19:12 -- accel/accel.sh@19 -- # read -r var val 00:07:38.850 16:19:12 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:38.850 16:19:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.850 16:19:12 -- accel/accel.sh@19 -- # IFS=: 00:07:38.850 16:19:12 -- accel/accel.sh@19 -- # read -r var val 00:07:38.850 16:19:12 -- accel/accel.sh@20 -- # val='512 bytes' 00:07:38.850 16:19:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.850 16:19:12 -- accel/accel.sh@19 -- # IFS=: 00:07:38.850 16:19:12 -- accel/accel.sh@19 -- # read -r var val 00:07:38.850 16:19:12 -- accel/accel.sh@20 -- # val='8 bytes' 00:07:38.850 16:19:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.850 16:19:12 -- accel/accel.sh@19 -- # IFS=: 00:07:38.850 16:19:12 -- accel/accel.sh@19 -- # read -r var val 00:07:38.850 16:19:12 -- accel/accel.sh@20 -- # val= 00:07:38.850 16:19:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.850 16:19:12 -- accel/accel.sh@19 -- # IFS=: 00:07:38.850 16:19:12 -- accel/accel.sh@19 -- # read -r var val 00:07:38.850 16:19:12 -- accel/accel.sh@20 -- # val=software 00:07:38.850 16:19:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.850 16:19:12 -- accel/accel.sh@22 -- # accel_module=software 00:07:38.850 16:19:12 -- accel/accel.sh@19 -- # IFS=: 00:07:38.851 16:19:12 -- accel/accel.sh@19 -- # read -r var val 00:07:38.851 16:19:12 -- accel/accel.sh@20 -- # val=32 00:07:38.851 16:19:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.851 16:19:12 -- accel/accel.sh@19 -- # IFS=: 00:07:38.851 16:19:12 -- accel/accel.sh@19 -- # read -r var val 00:07:38.851 16:19:12 -- accel/accel.sh@20 -- # val=32 00:07:38.851 16:19:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.851 16:19:12 -- accel/accel.sh@19 -- # IFS=: 00:07:38.851 16:19:12 -- accel/accel.sh@19 -- # read -r var val 00:07:38.851 16:19:12 -- accel/accel.sh@20 -- # val=1 00:07:38.851 16:19:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.851 16:19:12 -- accel/accel.sh@19 -- # IFS=: 00:07:38.851 16:19:12 -- accel/accel.sh@19 -- # read -r var val 00:07:38.851 16:19:12 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:38.851 16:19:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.851 16:19:12 -- accel/accel.sh@19 -- # IFS=: 00:07:38.851 16:19:12 -- accel/accel.sh@19 -- # read -r var val 00:07:38.851 16:19:12 -- accel/accel.sh@20 -- # val=No 00:07:38.851 16:19:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.851 16:19:12 -- accel/accel.sh@19 -- # IFS=: 00:07:38.851 16:19:12 -- accel/accel.sh@19 -- # read -r var val 00:07:38.851 16:19:12 -- accel/accel.sh@20 -- # val= 00:07:38.851 16:19:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.851 16:19:12 -- accel/accel.sh@19 -- # IFS=: 00:07:38.851 16:19:12 -- accel/accel.sh@19 -- # read -r var val 00:07:38.851 16:19:12 -- accel/accel.sh@20 -- # val= 00:07:38.851 16:19:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.851 16:19:12 -- accel/accel.sh@19 -- # IFS=: 00:07:38.851 16:19:12 -- accel/accel.sh@19 -- # read -r var val 00:07:40.225 16:19:13 -- accel/accel.sh@20 -- # val= 00:07:40.225 16:19:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.225 16:19:13 -- accel/accel.sh@19 -- # IFS=: 00:07:40.225 16:19:13 -- accel/accel.sh@19 -- # read -r var 
val 00:07:40.225 16:19:13 -- accel/accel.sh@20 -- # val= 00:07:40.225 16:19:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.225 16:19:13 -- accel/accel.sh@19 -- # IFS=: 00:07:40.225 16:19:13 -- accel/accel.sh@19 -- # read -r var val 00:07:40.225 16:19:13 -- accel/accel.sh@20 -- # val= 00:07:40.225 16:19:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.225 16:19:13 -- accel/accel.sh@19 -- # IFS=: 00:07:40.225 16:19:13 -- accel/accel.sh@19 -- # read -r var val 00:07:40.225 16:19:13 -- accel/accel.sh@20 -- # val= 00:07:40.225 16:19:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.225 16:19:13 -- accel/accel.sh@19 -- # IFS=: 00:07:40.225 16:19:13 -- accel/accel.sh@19 -- # read -r var val 00:07:40.225 16:19:13 -- accel/accel.sh@20 -- # val= 00:07:40.225 16:19:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.225 16:19:13 -- accel/accel.sh@19 -- # IFS=: 00:07:40.225 16:19:13 -- accel/accel.sh@19 -- # read -r var val 00:07:40.225 ************************************ 00:07:40.225 END TEST accel_dif_generate 00:07:40.226 ************************************ 00:07:40.226 16:19:13 -- accel/accel.sh@20 -- # val= 00:07:40.226 16:19:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.226 16:19:13 -- accel/accel.sh@19 -- # IFS=: 00:07:40.226 16:19:13 -- accel/accel.sh@19 -- # read -r var val 00:07:40.226 16:19:13 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:40.226 16:19:13 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:40.226 16:19:13 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:40.226 00:07:40.226 real 0m1.554s 00:07:40.226 user 0m1.342s 00:07:40.226 sys 0m0.115s 00:07:40.226 16:19:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:40.226 16:19:13 -- common/autotest_common.sh@10 -- # set +x 00:07:40.226 16:19:13 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:40.226 16:19:13 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:40.226 16:19:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:40.226 16:19:13 -- common/autotest_common.sh@10 -- # set +x 00:07:40.226 ************************************ 00:07:40.226 START TEST accel_dif_generate_copy 00:07:40.226 ************************************ 00:07:40.226 16:19:14 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:07:40.226 16:19:14 -- accel/accel.sh@16 -- # local accel_opc 00:07:40.226 16:19:14 -- accel/accel.sh@17 -- # local accel_module 00:07:40.226 16:19:14 -- accel/accel.sh@19 -- # IFS=: 00:07:40.226 16:19:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:40.226 16:19:14 -- accel/accel.sh@19 -- # read -r var val 00:07:40.226 16:19:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:40.226 16:19:14 -- accel/accel.sh@12 -- # build_accel_config 00:07:40.226 16:19:14 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:40.226 16:19:14 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:40.226 16:19:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.226 16:19:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.226 16:19:14 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:40.226 16:19:14 -- accel/accel.sh@40 -- # local IFS=, 00:07:40.226 16:19:14 -- accel/accel.sh@41 -- # jq -r . 00:07:40.226 [2024-04-17 16:19:14.037695] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 
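(The dif_verify and dif_generate passes above configure a 4096-byte payload, a 512-byte block size, and 8 bytes of metadata per block, which matches the standard T10 DIF tuple layout. A sketch of that layout and of a generate pass, assuming a crc16_t10dif() helper and simplified tag and byte-order handling; none of this is SPDK's actual DIF code:

    #include <stdint.h>
    #include <stddef.h>

    /* Illustrative only: one 8-byte T10 DIF tuple per 512-byte block, so a
     * 4096-byte payload carries 8 tuples. */
    struct t10_dif {
            uint16_t guard;   /* CRC16 of the block's data */
            uint16_t app_tag; /* application tag */
            uint32_t ref_tag; /* reference tag, typically the block's LBA */
    };

    uint16_t crc16_t10dif(const uint8_t *buf, size_t len); /* assumed helper, not shown */

    static void sw_dif_generate(const uint8_t *data, struct t10_dif *dif,
                                size_t nblocks, uint32_t first_ref)
    {
            for (size_t b = 0; b < nblocks; b++) {
                    dif[b].guard   = crc16_t10dif(data + b * 512, 512);
                    dif[b].app_tag = 0;                       /* simplified */
                    dif[b].ref_tag = first_ref + (uint32_t)b; /* simplified */
            }
    }
)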
00:07:40.226 [2024-04-17 16:19:14.037860] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63892 ] 00:07:40.226 [2024-04-17 16:19:14.176036] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.484 [2024-04-17 16:19:14.298312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.484 16:19:14 -- accel/accel.sh@20 -- # val= 00:07:40.484 16:19:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.484 16:19:14 -- accel/accel.sh@19 -- # IFS=: 00:07:40.484 16:19:14 -- accel/accel.sh@19 -- # read -r var val 00:07:40.484 16:19:14 -- accel/accel.sh@20 -- # val= 00:07:40.484 16:19:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.484 16:19:14 -- accel/accel.sh@19 -- # IFS=: 00:07:40.484 16:19:14 -- accel/accel.sh@19 -- # read -r var val 00:07:40.484 16:19:14 -- accel/accel.sh@20 -- # val=0x1 00:07:40.484 16:19:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.484 16:19:14 -- accel/accel.sh@19 -- # IFS=: 00:07:40.484 16:19:14 -- accel/accel.sh@19 -- # read -r var val 00:07:40.484 16:19:14 -- accel/accel.sh@20 -- # val= 00:07:40.484 16:19:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.484 16:19:14 -- accel/accel.sh@19 -- # IFS=: 00:07:40.484 16:19:14 -- accel/accel.sh@19 -- # read -r var val 00:07:40.484 16:19:14 -- accel/accel.sh@20 -- # val= 00:07:40.484 16:19:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.484 16:19:14 -- accel/accel.sh@19 -- # IFS=: 00:07:40.484 16:19:14 -- accel/accel.sh@19 -- # read -r var val 00:07:40.484 16:19:14 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:40.484 16:19:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.484 16:19:14 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:40.484 16:19:14 -- accel/accel.sh@19 -- # IFS=: 00:07:40.484 16:19:14 -- accel/accel.sh@19 -- # read -r var val 00:07:40.484 16:19:14 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:40.484 16:19:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.484 16:19:14 -- accel/accel.sh@19 -- # IFS=: 00:07:40.484 16:19:14 -- accel/accel.sh@19 -- # read -r var val 00:07:40.484 16:19:14 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:40.484 16:19:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.484 16:19:14 -- accel/accel.sh@19 -- # IFS=: 00:07:40.484 16:19:14 -- accel/accel.sh@19 -- # read -r var val 00:07:40.484 16:19:14 -- accel/accel.sh@20 -- # val= 00:07:40.484 16:19:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.484 16:19:14 -- accel/accel.sh@19 -- # IFS=: 00:07:40.484 16:19:14 -- accel/accel.sh@19 -- # read -r var val 00:07:40.484 16:19:14 -- accel/accel.sh@20 -- # val=software 00:07:40.484 16:19:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.484 16:19:14 -- accel/accel.sh@22 -- # accel_module=software 00:07:40.484 16:19:14 -- accel/accel.sh@19 -- # IFS=: 00:07:40.484 16:19:14 -- accel/accel.sh@19 -- # read -r var val 00:07:40.484 16:19:14 -- accel/accel.sh@20 -- # val=32 00:07:40.484 16:19:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.484 16:19:14 -- accel/accel.sh@19 -- # IFS=: 00:07:40.484 16:19:14 -- accel/accel.sh@19 -- # read -r var val 00:07:40.484 16:19:14 -- accel/accel.sh@20 -- # val=32 00:07:40.484 16:19:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.484 16:19:14 -- accel/accel.sh@19 -- # IFS=: 00:07:40.484 16:19:14 -- accel/accel.sh@19 -- # read -r var val 00:07:40.484 16:19:14 -- accel/accel.sh@20 
-- # val=1 00:07:40.484 16:19:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.484 16:19:14 -- accel/accel.sh@19 -- # IFS=: 00:07:40.484 16:19:14 -- accel/accel.sh@19 -- # read -r var val 00:07:40.484 16:19:14 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:40.484 16:19:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.484 16:19:14 -- accel/accel.sh@19 -- # IFS=: 00:07:40.484 16:19:14 -- accel/accel.sh@19 -- # read -r var val 00:07:40.484 16:19:14 -- accel/accel.sh@20 -- # val=No 00:07:40.484 16:19:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.484 16:19:14 -- accel/accel.sh@19 -- # IFS=: 00:07:40.484 16:19:14 -- accel/accel.sh@19 -- # read -r var val 00:07:40.484 16:19:14 -- accel/accel.sh@20 -- # val= 00:07:40.484 16:19:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.484 16:19:14 -- accel/accel.sh@19 -- # IFS=: 00:07:40.484 16:19:14 -- accel/accel.sh@19 -- # read -r var val 00:07:40.484 16:19:14 -- accel/accel.sh@20 -- # val= 00:07:40.484 16:19:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.484 16:19:14 -- accel/accel.sh@19 -- # IFS=: 00:07:40.484 16:19:14 -- accel/accel.sh@19 -- # read -r var val 00:07:41.861 16:19:15 -- accel/accel.sh@20 -- # val= 00:07:41.861 16:19:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.861 16:19:15 -- accel/accel.sh@19 -- # IFS=: 00:07:41.861 16:19:15 -- accel/accel.sh@19 -- # read -r var val 00:07:41.861 16:19:15 -- accel/accel.sh@20 -- # val= 00:07:41.861 16:19:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.861 16:19:15 -- accel/accel.sh@19 -- # IFS=: 00:07:41.861 16:19:15 -- accel/accel.sh@19 -- # read -r var val 00:07:41.861 16:19:15 -- accel/accel.sh@20 -- # val= 00:07:41.861 16:19:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.861 16:19:15 -- accel/accel.sh@19 -- # IFS=: 00:07:41.861 16:19:15 -- accel/accel.sh@19 -- # read -r var val 00:07:41.861 16:19:15 -- accel/accel.sh@20 -- # val= 00:07:41.861 16:19:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.861 16:19:15 -- accel/accel.sh@19 -- # IFS=: 00:07:41.861 16:19:15 -- accel/accel.sh@19 -- # read -r var val 00:07:41.861 16:19:15 -- accel/accel.sh@20 -- # val= 00:07:41.861 16:19:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.861 16:19:15 -- accel/accel.sh@19 -- # IFS=: 00:07:41.861 16:19:15 -- accel/accel.sh@19 -- # read -r var val 00:07:41.861 ************************************ 00:07:41.861 END TEST accel_dif_generate_copy 00:07:41.861 ************************************ 00:07:41.861 16:19:15 -- accel/accel.sh@20 -- # val= 00:07:41.861 16:19:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.861 16:19:15 -- accel/accel.sh@19 -- # IFS=: 00:07:41.861 16:19:15 -- accel/accel.sh@19 -- # read -r var val 00:07:41.861 16:19:15 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:41.861 16:19:15 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:41.861 16:19:15 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:41.861 00:07:41.861 real 0m1.550s 00:07:41.861 user 0m1.331s 00:07:41.861 sys 0m0.117s 00:07:41.861 16:19:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:41.861 16:19:15 -- common/autotest_common.sh@10 -- # set +x 00:07:41.861 16:19:15 -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:41.861 16:19:15 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:41.861 16:19:15 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:41.861 16:19:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:41.861 16:19:15 -- 
common/autotest_common.sh@10 -- # set +x 00:07:41.861 ************************************ 00:07:41.861 START TEST accel_comp 00:07:41.861 ************************************ 00:07:41.861 16:19:15 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:41.861 16:19:15 -- accel/accel.sh@16 -- # local accel_opc 00:07:41.861 16:19:15 -- accel/accel.sh@17 -- # local accel_module 00:07:41.861 16:19:15 -- accel/accel.sh@19 -- # IFS=: 00:07:41.861 16:19:15 -- accel/accel.sh@19 -- # read -r var val 00:07:41.861 16:19:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:41.861 16:19:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:41.861 16:19:15 -- accel/accel.sh@12 -- # build_accel_config 00:07:41.861 16:19:15 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:41.861 16:19:15 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:41.861 16:19:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.861 16:19:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.861 16:19:15 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:41.861 16:19:15 -- accel/accel.sh@40 -- # local IFS=, 00:07:41.861 16:19:15 -- accel/accel.sh@41 -- # jq -r . 00:07:41.861 [2024-04-17 16:19:15.697376] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:07:41.861 [2024-04-17 16:19:15.697501] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63927 ] 00:07:41.861 [2024-04-17 16:19:15.834715] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.120 [2024-04-17 16:19:15.956615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.120 16:19:16 -- accel/accel.sh@20 -- # val= 00:07:42.120 16:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.120 16:19:16 -- accel/accel.sh@19 -- # IFS=: 00:07:42.120 16:19:16 -- accel/accel.sh@19 -- # read -r var val 00:07:42.120 16:19:16 -- accel/accel.sh@20 -- # val= 00:07:42.120 16:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.120 16:19:16 -- accel/accel.sh@19 -- # IFS=: 00:07:42.120 16:19:16 -- accel/accel.sh@19 -- # read -r var val 00:07:42.120 16:19:16 -- accel/accel.sh@20 -- # val= 00:07:42.120 16:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.120 16:19:16 -- accel/accel.sh@19 -- # IFS=: 00:07:42.120 16:19:16 -- accel/accel.sh@19 -- # read -r var val 00:07:42.120 16:19:16 -- accel/accel.sh@20 -- # val=0x1 00:07:42.120 16:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.120 16:19:16 -- accel/accel.sh@19 -- # IFS=: 00:07:42.120 16:19:16 -- accel/accel.sh@19 -- # read -r var val 00:07:42.120 16:19:16 -- accel/accel.sh@20 -- # val= 00:07:42.120 16:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.120 16:19:16 -- accel/accel.sh@19 -- # IFS=: 00:07:42.120 16:19:16 -- accel/accel.sh@19 -- # read -r var val 00:07:42.120 16:19:16 -- accel/accel.sh@20 -- # val= 00:07:42.120 16:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.120 16:19:16 -- accel/accel.sh@19 -- # IFS=: 00:07:42.120 16:19:16 -- accel/accel.sh@19 -- # read -r var val 00:07:42.120 16:19:16 -- accel/accel.sh@20 -- # val=compress 00:07:42.120 16:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.120 16:19:16 -- accel/accel.sh@23 
-- # accel_opc=compress 00:07:42.120 16:19:16 -- accel/accel.sh@19 -- # IFS=: 00:07:42.120 16:19:16 -- accel/accel.sh@19 -- # read -r var val 00:07:42.120 16:19:16 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:42.120 16:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.120 16:19:16 -- accel/accel.sh@19 -- # IFS=: 00:07:42.120 16:19:16 -- accel/accel.sh@19 -- # read -r var val 00:07:42.120 16:19:16 -- accel/accel.sh@20 -- # val= 00:07:42.120 16:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.120 16:19:16 -- accel/accel.sh@19 -- # IFS=: 00:07:42.120 16:19:16 -- accel/accel.sh@19 -- # read -r var val 00:07:42.120 16:19:16 -- accel/accel.sh@20 -- # val=software 00:07:42.120 16:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.120 16:19:16 -- accel/accel.sh@22 -- # accel_module=software 00:07:42.120 16:19:16 -- accel/accel.sh@19 -- # IFS=: 00:07:42.120 16:19:16 -- accel/accel.sh@19 -- # read -r var val 00:07:42.120 16:19:16 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:42.120 16:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.120 16:19:16 -- accel/accel.sh@19 -- # IFS=: 00:07:42.120 16:19:16 -- accel/accel.sh@19 -- # read -r var val 00:07:42.120 16:19:16 -- accel/accel.sh@20 -- # val=32 00:07:42.120 16:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.120 16:19:16 -- accel/accel.sh@19 -- # IFS=: 00:07:42.120 16:19:16 -- accel/accel.sh@19 -- # read -r var val 00:07:42.120 16:19:16 -- accel/accel.sh@20 -- # val=32 00:07:42.120 16:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.120 16:19:16 -- accel/accel.sh@19 -- # IFS=: 00:07:42.120 16:19:16 -- accel/accel.sh@19 -- # read -r var val 00:07:42.120 16:19:16 -- accel/accel.sh@20 -- # val=1 00:07:42.120 16:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.120 16:19:16 -- accel/accel.sh@19 -- # IFS=: 00:07:42.120 16:19:16 -- accel/accel.sh@19 -- # read -r var val 00:07:42.120 16:19:16 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:42.120 16:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.120 16:19:16 -- accel/accel.sh@19 -- # IFS=: 00:07:42.120 16:19:16 -- accel/accel.sh@19 -- # read -r var val 00:07:42.120 16:19:16 -- accel/accel.sh@20 -- # val=No 00:07:42.120 16:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.120 16:19:16 -- accel/accel.sh@19 -- # IFS=: 00:07:42.120 16:19:16 -- accel/accel.sh@19 -- # read -r var val 00:07:42.120 16:19:16 -- accel/accel.sh@20 -- # val= 00:07:42.120 16:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.120 16:19:16 -- accel/accel.sh@19 -- # IFS=: 00:07:42.120 16:19:16 -- accel/accel.sh@19 -- # read -r var val 00:07:42.120 16:19:16 -- accel/accel.sh@20 -- # val= 00:07:42.120 16:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.120 16:19:16 -- accel/accel.sh@19 -- # IFS=: 00:07:42.120 16:19:16 -- accel/accel.sh@19 -- # read -r var val 00:07:43.525 16:19:17 -- accel/accel.sh@20 -- # val= 00:07:43.525 16:19:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.525 16:19:17 -- accel/accel.sh@19 -- # IFS=: 00:07:43.525 16:19:17 -- accel/accel.sh@19 -- # read -r var val 00:07:43.525 16:19:17 -- accel/accel.sh@20 -- # val= 00:07:43.525 16:19:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.525 16:19:17 -- accel/accel.sh@19 -- # IFS=: 00:07:43.525 16:19:17 -- accel/accel.sh@19 -- # read -r var val 00:07:43.525 16:19:17 -- accel/accel.sh@20 -- # val= 00:07:43.525 16:19:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.525 16:19:17 -- accel/accel.sh@19 -- # IFS=: 00:07:43.525 16:19:17 -- accel/accel.sh@19 -- # 
read -r var val 00:07:43.525 16:19:17 -- accel/accel.sh@20 -- # val= 00:07:43.525 ************************************ 00:07:43.525 END TEST accel_comp 00:07:43.525 ************************************ 00:07:43.525 16:19:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.525 16:19:17 -- accel/accel.sh@19 -- # IFS=: 00:07:43.525 16:19:17 -- accel/accel.sh@19 -- # read -r var val 00:07:43.525 16:19:17 -- accel/accel.sh@20 -- # val= 00:07:43.525 16:19:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.525 16:19:17 -- accel/accel.sh@19 -- # IFS=: 00:07:43.525 16:19:17 -- accel/accel.sh@19 -- # read -r var val 00:07:43.525 16:19:17 -- accel/accel.sh@20 -- # val= 00:07:43.525 16:19:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.525 16:19:17 -- accel/accel.sh@19 -- # IFS=: 00:07:43.525 16:19:17 -- accel/accel.sh@19 -- # read -r var val 00:07:43.525 16:19:17 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:43.525 16:19:17 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:43.525 16:19:17 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:43.525 00:07:43.525 real 0m1.545s 00:07:43.525 user 0m1.330s 00:07:43.525 sys 0m0.117s 00:07:43.525 16:19:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:43.525 16:19:17 -- common/autotest_common.sh@10 -- # set +x 00:07:43.525 16:19:17 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:43.525 16:19:17 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:43.525 16:19:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:43.525 16:19:17 -- common/autotest_common.sh@10 -- # set +x 00:07:43.525 ************************************ 00:07:43.525 START TEST accel_decomp 00:07:43.525 ************************************ 00:07:43.525 16:19:17 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:43.525 16:19:17 -- accel/accel.sh@16 -- # local accel_opc 00:07:43.525 16:19:17 -- accel/accel.sh@17 -- # local accel_module 00:07:43.525 16:19:17 -- accel/accel.sh@19 -- # IFS=: 00:07:43.525 16:19:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:43.525 16:19:17 -- accel/accel.sh@19 -- # read -r var val 00:07:43.525 16:19:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:43.525 16:19:17 -- accel/accel.sh@12 -- # build_accel_config 00:07:43.525 16:19:17 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:43.525 16:19:17 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:43.525 16:19:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.525 16:19:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.525 16:19:17 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:43.525 16:19:17 -- accel/accel.sh@40 -- # local IFS=, 00:07:43.525 16:19:17 -- accel/accel.sh@41 -- # jq -r . 00:07:43.525 [2024-04-17 16:19:17.361524] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 
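The two runs around this point follow the same shape: the accel.sh wrapper builds a JSON accel config, hands it to accel_perf over /dev/fd/62, and points -l at the bib test corpus. accel_comp just finished in 1.545 s of wall time on the software module, and accel_decomp repeats the exercise with -w decompress plus -y to verify the output. A hedged sketch of reproducing the compress run by hand, with an empty JSON object standing in for the generated config (an assumption, not what build_accel_config actually emits):

# paths below are taken verbatim from the traces in this log
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
    -c <(echo '{}') \
    -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
# <(...) is bash process substitution and expands to a /dev/fd/NN path,
# which is why the accel.sh@12 trace lines all read "-c /dev/fd/62"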
00:07:43.525 [2024-04-17 16:19:17.361632] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63972 ] 00:07:43.525 [2024-04-17 16:19:17.500620] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.784 [2024-04-17 16:19:17.631417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.784 16:19:17 -- accel/accel.sh@20 -- # val= 00:07:43.784 16:19:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.784 16:19:17 -- accel/accel.sh@19 -- # IFS=: 00:07:43.784 16:19:17 -- accel/accel.sh@19 -- # read -r var val 00:07:43.784 16:19:17 -- accel/accel.sh@20 -- # val= 00:07:43.784 16:19:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.784 16:19:17 -- accel/accel.sh@19 -- # IFS=: 00:07:43.784 16:19:17 -- accel/accel.sh@19 -- # read -r var val 00:07:43.784 16:19:17 -- accel/accel.sh@20 -- # val= 00:07:43.784 16:19:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.784 16:19:17 -- accel/accel.sh@19 -- # IFS=: 00:07:43.784 16:19:17 -- accel/accel.sh@19 -- # read -r var val 00:07:43.784 16:19:17 -- accel/accel.sh@20 -- # val=0x1 00:07:43.784 16:19:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.784 16:19:17 -- accel/accel.sh@19 -- # IFS=: 00:07:43.784 16:19:17 -- accel/accel.sh@19 -- # read -r var val 00:07:43.784 16:19:17 -- accel/accel.sh@20 -- # val= 00:07:43.784 16:19:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.784 16:19:17 -- accel/accel.sh@19 -- # IFS=: 00:07:43.784 16:19:17 -- accel/accel.sh@19 -- # read -r var val 00:07:43.784 16:19:17 -- accel/accel.sh@20 -- # val= 00:07:43.784 16:19:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.784 16:19:17 -- accel/accel.sh@19 -- # IFS=: 00:07:43.784 16:19:17 -- accel/accel.sh@19 -- # read -r var val 00:07:43.784 16:19:17 -- accel/accel.sh@20 -- # val=decompress 00:07:43.784 16:19:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.784 16:19:17 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:43.784 16:19:17 -- accel/accel.sh@19 -- # IFS=: 00:07:43.784 16:19:17 -- accel/accel.sh@19 -- # read -r var val 00:07:43.784 16:19:17 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:43.784 16:19:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.784 16:19:17 -- accel/accel.sh@19 -- # IFS=: 00:07:43.784 16:19:17 -- accel/accel.sh@19 -- # read -r var val 00:07:43.784 16:19:17 -- accel/accel.sh@20 -- # val= 00:07:43.784 16:19:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.784 16:19:17 -- accel/accel.sh@19 -- # IFS=: 00:07:43.784 16:19:17 -- accel/accel.sh@19 -- # read -r var val 00:07:43.784 16:19:17 -- accel/accel.sh@20 -- # val=software 00:07:43.784 16:19:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.784 16:19:17 -- accel/accel.sh@22 -- # accel_module=software 00:07:43.784 16:19:17 -- accel/accel.sh@19 -- # IFS=: 00:07:43.784 16:19:17 -- accel/accel.sh@19 -- # read -r var val 00:07:43.784 16:19:17 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:43.784 16:19:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.784 16:19:17 -- accel/accel.sh@19 -- # IFS=: 00:07:43.784 16:19:17 -- accel/accel.sh@19 -- # read -r var val 00:07:43.784 16:19:17 -- accel/accel.sh@20 -- # val=32 00:07:43.784 16:19:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.784 16:19:17 -- accel/accel.sh@19 -- # IFS=: 00:07:43.784 16:19:17 -- accel/accel.sh@19 -- # read -r var val 00:07:43.784 16:19:17 -- 
accel/accel.sh@20 -- # val=32 00:07:43.784 16:19:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.784 16:19:17 -- accel/accel.sh@19 -- # IFS=: 00:07:43.784 16:19:17 -- accel/accel.sh@19 -- # read -r var val 00:07:43.784 16:19:17 -- accel/accel.sh@20 -- # val=1 00:07:43.784 16:19:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.784 16:19:17 -- accel/accel.sh@19 -- # IFS=: 00:07:43.784 16:19:17 -- accel/accel.sh@19 -- # read -r var val 00:07:43.784 16:19:17 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:43.784 16:19:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.784 16:19:17 -- accel/accel.sh@19 -- # IFS=: 00:07:43.784 16:19:17 -- accel/accel.sh@19 -- # read -r var val 00:07:43.784 16:19:17 -- accel/accel.sh@20 -- # val=Yes 00:07:43.784 16:19:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.784 16:19:17 -- accel/accel.sh@19 -- # IFS=: 00:07:43.784 16:19:17 -- accel/accel.sh@19 -- # read -r var val 00:07:43.784 16:19:17 -- accel/accel.sh@20 -- # val= 00:07:43.784 16:19:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.784 16:19:17 -- accel/accel.sh@19 -- # IFS=: 00:07:43.784 16:19:17 -- accel/accel.sh@19 -- # read -r var val 00:07:43.784 16:19:17 -- accel/accel.sh@20 -- # val= 00:07:43.784 16:19:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.784 16:19:17 -- accel/accel.sh@19 -- # IFS=: 00:07:43.784 16:19:17 -- accel/accel.sh@19 -- # read -r var val 00:07:45.160 16:19:18 -- accel/accel.sh@20 -- # val= 00:07:45.160 16:19:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.160 16:19:18 -- accel/accel.sh@19 -- # IFS=: 00:07:45.160 16:19:18 -- accel/accel.sh@19 -- # read -r var val 00:07:45.160 16:19:18 -- accel/accel.sh@20 -- # val= 00:07:45.160 16:19:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.160 16:19:18 -- accel/accel.sh@19 -- # IFS=: 00:07:45.160 16:19:18 -- accel/accel.sh@19 -- # read -r var val 00:07:45.160 16:19:18 -- accel/accel.sh@20 -- # val= 00:07:45.160 16:19:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.160 16:19:18 -- accel/accel.sh@19 -- # IFS=: 00:07:45.160 16:19:18 -- accel/accel.sh@19 -- # read -r var val 00:07:45.160 16:19:18 -- accel/accel.sh@20 -- # val= 00:07:45.160 16:19:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.160 16:19:18 -- accel/accel.sh@19 -- # IFS=: 00:07:45.160 16:19:18 -- accel/accel.sh@19 -- # read -r var val 00:07:45.160 16:19:18 -- accel/accel.sh@20 -- # val= 00:07:45.160 16:19:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.160 16:19:18 -- accel/accel.sh@19 -- # IFS=: 00:07:45.160 16:19:18 -- accel/accel.sh@19 -- # read -r var val 00:07:45.160 16:19:18 -- accel/accel.sh@20 -- # val= 00:07:45.160 16:19:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.160 16:19:18 -- accel/accel.sh@19 -- # IFS=: 00:07:45.160 16:19:18 -- accel/accel.sh@19 -- # read -r var val 00:07:45.160 16:19:18 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:45.160 ************************************ 00:07:45.160 END TEST accel_decomp 00:07:45.160 ************************************ 00:07:45.160 16:19:18 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:45.160 16:19:18 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:45.160 00:07:45.160 real 0m1.554s 00:07:45.160 user 0m1.346s 00:07:45.160 sys 0m0.115s 00:07:45.160 16:19:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:45.160 16:19:18 -- common/autotest_common.sh@10 -- # set +x 00:07:45.160 16:19:18 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
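Every test in this log is wrapped by run_test from common/autotest_common.sh, which prints the starred START/END banners, checks its arguments (the "'[' 11 -le 1 ']'" traces), and produces the real/user/sys lines via the shell's time keyword. A simplified sketch of that wrapper, reconstructed only from the banner and timing output visible here; the real helper also manages xtrace, and stdout/stderr interleaving in the captured log can reorder the time output relative to the banners:

run_test() {   # assumed shape, inferred from the log output above
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}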
00:07:45.160 16:19:18 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:45.160 16:19:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:45.160 16:19:18 -- common/autotest_common.sh@10 -- # set +x 00:07:45.160 ************************************ 00:07:45.160 START TEST accel_decmop_full 00:07:45.160 ************************************ 00:07:45.160 16:19:19 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:45.160 16:19:19 -- accel/accel.sh@16 -- # local accel_opc 00:07:45.160 16:19:19 -- accel/accel.sh@17 -- # local accel_module 00:07:45.160 16:19:19 -- accel/accel.sh@19 -- # IFS=: 00:07:45.160 16:19:19 -- accel/accel.sh@19 -- # read -r var val 00:07:45.160 16:19:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:45.160 16:19:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:45.160 16:19:19 -- accel/accel.sh@12 -- # build_accel_config 00:07:45.160 16:19:19 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:45.160 16:19:19 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:45.160 16:19:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.160 16:19:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.160 16:19:19 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:45.160 16:19:19 -- accel/accel.sh@40 -- # local IFS=, 00:07:45.160 16:19:19 -- accel/accel.sh@41 -- # jq -r . 00:07:45.160 [2024-04-17 16:19:19.043092] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:07:45.160 [2024-04-17 16:19:19.043197] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64012 ] 00:07:45.160 [2024-04-17 16:19:19.182450] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.418 [2024-04-17 16:19:19.314309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.418 16:19:19 -- accel/accel.sh@20 -- # val= 00:07:45.418 16:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.418 16:19:19 -- accel/accel.sh@19 -- # IFS=: 00:07:45.418 16:19:19 -- accel/accel.sh@19 -- # read -r var val 00:07:45.418 16:19:19 -- accel/accel.sh@20 -- # val= 00:07:45.418 16:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.418 16:19:19 -- accel/accel.sh@19 -- # IFS=: 00:07:45.418 16:19:19 -- accel/accel.sh@19 -- # read -r var val 00:07:45.418 16:19:19 -- accel/accel.sh@20 -- # val= 00:07:45.418 16:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.418 16:19:19 -- accel/accel.sh@19 -- # IFS=: 00:07:45.418 16:19:19 -- accel/accel.sh@19 -- # read -r var val 00:07:45.418 16:19:19 -- accel/accel.sh@20 -- # val=0x1 00:07:45.418 16:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.418 16:19:19 -- accel/accel.sh@19 -- # IFS=: 00:07:45.418 16:19:19 -- accel/accel.sh@19 -- # read -r var val 00:07:45.418 16:19:19 -- accel/accel.sh@20 -- # val= 00:07:45.418 16:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.418 16:19:19 -- accel/accel.sh@19 -- # IFS=: 00:07:45.418 16:19:19 -- accel/accel.sh@19 -- # read -r var val 00:07:45.418 16:19:19 -- accel/accel.sh@20 -- # val= 00:07:45.418 16:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.418 16:19:19 -- accel/accel.sh@19 -- # IFS=: 00:07:45.418 
16:19:19 -- accel/accel.sh@19 -- # read -r var val 00:07:45.418 16:19:19 -- accel/accel.sh@20 -- # val=decompress 00:07:45.418 16:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.418 16:19:19 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:45.418 16:19:19 -- accel/accel.sh@19 -- # IFS=: 00:07:45.418 16:19:19 -- accel/accel.sh@19 -- # read -r var val 00:07:45.418 16:19:19 -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:45.418 16:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.418 16:19:19 -- accel/accel.sh@19 -- # IFS=: 00:07:45.418 16:19:19 -- accel/accel.sh@19 -- # read -r var val 00:07:45.418 16:19:19 -- accel/accel.sh@20 -- # val= 00:07:45.418 16:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.418 16:19:19 -- accel/accel.sh@19 -- # IFS=: 00:07:45.418 16:19:19 -- accel/accel.sh@19 -- # read -r var val 00:07:45.418 16:19:19 -- accel/accel.sh@20 -- # val=software 00:07:45.418 16:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.418 16:19:19 -- accel/accel.sh@22 -- # accel_module=software 00:07:45.418 16:19:19 -- accel/accel.sh@19 -- # IFS=: 00:07:45.418 16:19:19 -- accel/accel.sh@19 -- # read -r var val 00:07:45.418 16:19:19 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:45.418 16:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.418 16:19:19 -- accel/accel.sh@19 -- # IFS=: 00:07:45.418 16:19:19 -- accel/accel.sh@19 -- # read -r var val 00:07:45.418 16:19:19 -- accel/accel.sh@20 -- # val=32 00:07:45.418 16:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.418 16:19:19 -- accel/accel.sh@19 -- # IFS=: 00:07:45.418 16:19:19 -- accel/accel.sh@19 -- # read -r var val 00:07:45.418 16:19:19 -- accel/accel.sh@20 -- # val=32 00:07:45.419 16:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.419 16:19:19 -- accel/accel.sh@19 -- # IFS=: 00:07:45.419 16:19:19 -- accel/accel.sh@19 -- # read -r var val 00:07:45.419 16:19:19 -- accel/accel.sh@20 -- # val=1 00:07:45.419 16:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.419 16:19:19 -- accel/accel.sh@19 -- # IFS=: 00:07:45.419 16:19:19 -- accel/accel.sh@19 -- # read -r var val 00:07:45.419 16:19:19 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:45.419 16:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.419 16:19:19 -- accel/accel.sh@19 -- # IFS=: 00:07:45.419 16:19:19 -- accel/accel.sh@19 -- # read -r var val 00:07:45.419 16:19:19 -- accel/accel.sh@20 -- # val=Yes 00:07:45.419 16:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.419 16:19:19 -- accel/accel.sh@19 -- # IFS=: 00:07:45.419 16:19:19 -- accel/accel.sh@19 -- # read -r var val 00:07:45.419 16:19:19 -- accel/accel.sh@20 -- # val= 00:07:45.419 16:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.419 16:19:19 -- accel/accel.sh@19 -- # IFS=: 00:07:45.419 16:19:19 -- accel/accel.sh@19 -- # read -r var val 00:07:45.419 16:19:19 -- accel/accel.sh@20 -- # val= 00:07:45.419 16:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.419 16:19:19 -- accel/accel.sh@19 -- # IFS=: 00:07:45.419 16:19:19 -- accel/accel.sh@19 -- # read -r var val 00:07:46.794 16:19:20 -- accel/accel.sh@20 -- # val= 00:07:46.794 16:19:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.794 16:19:20 -- accel/accel.sh@19 -- # IFS=: 00:07:46.794 16:19:20 -- accel/accel.sh@19 -- # read -r var val 00:07:46.794 16:19:20 -- accel/accel.sh@20 -- # val= 00:07:46.794 16:19:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.794 16:19:20 -- accel/accel.sh@19 -- # IFS=: 00:07:46.794 16:19:20 -- accel/accel.sh@19 -- # read -r 
var val 00:07:46.794 16:19:20 -- accel/accel.sh@20 -- # val= 00:07:46.794 16:19:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.794 16:19:20 -- accel/accel.sh@19 -- # IFS=: 00:07:46.794 16:19:20 -- accel/accel.sh@19 -- # read -r var val 00:07:46.794 16:19:20 -- accel/accel.sh@20 -- # val= 00:07:46.794 16:19:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.794 16:19:20 -- accel/accel.sh@19 -- # IFS=: 00:07:46.794 16:19:20 -- accel/accel.sh@19 -- # read -r var val 00:07:46.794 16:19:20 -- accel/accel.sh@20 -- # val= 00:07:46.794 16:19:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.794 16:19:20 -- accel/accel.sh@19 -- # IFS=: 00:07:46.794 16:19:20 -- accel/accel.sh@19 -- # read -r var val 00:07:46.794 16:19:20 -- accel/accel.sh@20 -- # val= 00:07:46.794 16:19:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.794 16:19:20 -- accel/accel.sh@19 -- # IFS=: 00:07:46.794 16:19:20 -- accel/accel.sh@19 -- # read -r var val 00:07:46.794 16:19:20 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:46.794 16:19:20 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:46.794 16:19:20 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:46.794 00:07:46.794 real 0m1.569s 00:07:46.794 user 0m1.356s 00:07:46.794 sys 0m0.118s 00:07:46.794 16:19:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:46.794 16:19:20 -- common/autotest_common.sh@10 -- # set +x 00:07:46.794 ************************************ 00:07:46.794 END TEST accel_decmop_full 00:07:46.794 ************************************ 00:07:46.794 16:19:20 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:46.794 16:19:20 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:46.794 16:19:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:46.794 16:19:20 -- common/autotest_common.sh@10 -- # set +x 00:07:46.794 ************************************ 00:07:46.794 START TEST accel_decomp_mcore 00:07:46.794 ************************************ 00:07:46.794 16:19:20 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:46.794 16:19:20 -- accel/accel.sh@16 -- # local accel_opc 00:07:46.794 16:19:20 -- accel/accel.sh@17 -- # local accel_module 00:07:46.794 16:19:20 -- accel/accel.sh@19 -- # IFS=: 00:07:46.794 16:19:20 -- accel/accel.sh@19 -- # read -r var val 00:07:46.794 16:19:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:46.794 16:19:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:46.794 16:19:20 -- accel/accel.sh@12 -- # build_accel_config 00:07:46.794 16:19:20 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:46.794 16:19:20 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:46.794 16:19:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:46.794 16:19:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:46.794 16:19:20 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:46.794 16:19:20 -- accel/accel.sh@40 -- # local IFS=, 00:07:46.794 16:19:20 -- accel/accel.sh@41 -- # jq -r . 00:07:46.794 [2024-04-17 16:19:20.734302] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 
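Two knobs separate the recent runs from the one starting here. In accel_decmop_full, -o 0 switched the transfer size from the 4096-byte default to the whole input file, which is why the trace recorded val='111250 bytes'. accel_decomp_mcore instead keeps 4096-byte blocks but adds -m 0xf; that mask becomes the EAL argument -c 0xf below and brings up four reactors, one per core. Restated by hand, with the same empty-config stand-in assumption as the earlier sketch:

/home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
    -c <(echo '{}') \
    -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf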
00:07:46.794 [2024-04-17 16:19:20.734404] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64050 ] 00:07:47.052 [2024-04-17 16:19:20.870038] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:47.052 [2024-04-17 16:19:20.995226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.053 [2024-04-17 16:19:20.995359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.053 [2024-04-17 16:19:20.995490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.053 [2024-04-17 16:19:20.995490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:47.053 16:19:21 -- accel/accel.sh@20 -- # val= 00:07:47.053 16:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.053 16:19:21 -- accel/accel.sh@19 -- # IFS=: 00:07:47.053 16:19:21 -- accel/accel.sh@19 -- # read -r var val 00:07:47.053 16:19:21 -- accel/accel.sh@20 -- # val= 00:07:47.053 16:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.053 16:19:21 -- accel/accel.sh@19 -- # IFS=: 00:07:47.053 16:19:21 -- accel/accel.sh@19 -- # read -r var val 00:07:47.053 16:19:21 -- accel/accel.sh@20 -- # val= 00:07:47.053 16:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.053 16:19:21 -- accel/accel.sh@19 -- # IFS=: 00:07:47.053 16:19:21 -- accel/accel.sh@19 -- # read -r var val 00:07:47.053 16:19:21 -- accel/accel.sh@20 -- # val=0xf 00:07:47.053 16:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.053 16:19:21 -- accel/accel.sh@19 -- # IFS=: 00:07:47.053 16:19:21 -- accel/accel.sh@19 -- # read -r var val 00:07:47.053 16:19:21 -- accel/accel.sh@20 -- # val= 00:07:47.053 16:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.053 16:19:21 -- accel/accel.sh@19 -- # IFS=: 00:07:47.053 16:19:21 -- accel/accel.sh@19 -- # read -r var val 00:07:47.053 16:19:21 -- accel/accel.sh@20 -- # val= 00:07:47.053 16:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.053 16:19:21 -- accel/accel.sh@19 -- # IFS=: 00:07:47.053 16:19:21 -- accel/accel.sh@19 -- # read -r var val 00:07:47.053 16:19:21 -- accel/accel.sh@20 -- # val=decompress 00:07:47.053 16:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.053 16:19:21 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:47.053 16:19:21 -- accel/accel.sh@19 -- # IFS=: 00:07:47.053 16:19:21 -- accel/accel.sh@19 -- # read -r var val 00:07:47.053 16:19:21 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:47.053 16:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.053 16:19:21 -- accel/accel.sh@19 -- # IFS=: 00:07:47.053 16:19:21 -- accel/accel.sh@19 -- # read -r var val 00:07:47.053 16:19:21 -- accel/accel.sh@20 -- # val= 00:07:47.053 16:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.053 16:19:21 -- accel/accel.sh@19 -- # IFS=: 00:07:47.053 16:19:21 -- accel/accel.sh@19 -- # read -r var val 00:07:47.053 16:19:21 -- accel/accel.sh@20 -- # val=software 00:07:47.053 16:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.053 16:19:21 -- accel/accel.sh@22 -- # accel_module=software 00:07:47.053 16:19:21 -- accel/accel.sh@19 -- # IFS=: 00:07:47.053 16:19:21 -- accel/accel.sh@19 -- # read -r var val 00:07:47.053 16:19:21 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:47.053 16:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.053 16:19:21 -- accel/accel.sh@19 -- # IFS=: 
00:07:47.053 16:19:21 -- accel/accel.sh@19 -- # read -r var val 00:07:47.053 16:19:21 -- accel/accel.sh@20 -- # val=32 00:07:47.053 16:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.053 16:19:21 -- accel/accel.sh@19 -- # IFS=: 00:07:47.053 16:19:21 -- accel/accel.sh@19 -- # read -r var val 00:07:47.053 16:19:21 -- accel/accel.sh@20 -- # val=32 00:07:47.053 16:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.053 16:19:21 -- accel/accel.sh@19 -- # IFS=: 00:07:47.053 16:19:21 -- accel/accel.sh@19 -- # read -r var val 00:07:47.053 16:19:21 -- accel/accel.sh@20 -- # val=1 00:07:47.053 16:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.053 16:19:21 -- accel/accel.sh@19 -- # IFS=: 00:07:47.053 16:19:21 -- accel/accel.sh@19 -- # read -r var val 00:07:47.053 16:19:21 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:47.053 16:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.053 16:19:21 -- accel/accel.sh@19 -- # IFS=: 00:07:47.053 16:19:21 -- accel/accel.sh@19 -- # read -r var val 00:07:47.053 16:19:21 -- accel/accel.sh@20 -- # val=Yes 00:07:47.053 16:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.053 16:19:21 -- accel/accel.sh@19 -- # IFS=: 00:07:47.053 16:19:21 -- accel/accel.sh@19 -- # read -r var val 00:07:47.053 16:19:21 -- accel/accel.sh@20 -- # val= 00:07:47.053 16:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.053 16:19:21 -- accel/accel.sh@19 -- # IFS=: 00:07:47.053 16:19:21 -- accel/accel.sh@19 -- # read -r var val 00:07:47.053 16:19:21 -- accel/accel.sh@20 -- # val= 00:07:47.053 16:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:47.053 16:19:21 -- accel/accel.sh@19 -- # IFS=: 00:07:47.053 16:19:21 -- accel/accel.sh@19 -- # read -r var val 00:07:48.429 16:19:22 -- accel/accel.sh@20 -- # val= 00:07:48.429 16:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.429 16:19:22 -- accel/accel.sh@19 -- # IFS=: 00:07:48.429 16:19:22 -- accel/accel.sh@19 -- # read -r var val 00:07:48.429 16:19:22 -- accel/accel.sh@20 -- # val= 00:07:48.429 16:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.429 16:19:22 -- accel/accel.sh@19 -- # IFS=: 00:07:48.429 16:19:22 -- accel/accel.sh@19 -- # read -r var val 00:07:48.429 16:19:22 -- accel/accel.sh@20 -- # val= 00:07:48.429 16:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.429 16:19:22 -- accel/accel.sh@19 -- # IFS=: 00:07:48.429 16:19:22 -- accel/accel.sh@19 -- # read -r var val 00:07:48.429 16:19:22 -- accel/accel.sh@20 -- # val= 00:07:48.429 16:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.430 16:19:22 -- accel/accel.sh@19 -- # IFS=: 00:07:48.430 16:19:22 -- accel/accel.sh@19 -- # read -r var val 00:07:48.430 16:19:22 -- accel/accel.sh@20 -- # val= 00:07:48.430 16:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.430 16:19:22 -- accel/accel.sh@19 -- # IFS=: 00:07:48.430 16:19:22 -- accel/accel.sh@19 -- # read -r var val 00:07:48.430 16:19:22 -- accel/accel.sh@20 -- # val= 00:07:48.430 16:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.430 16:19:22 -- accel/accel.sh@19 -- # IFS=: 00:07:48.430 16:19:22 -- accel/accel.sh@19 -- # read -r var val 00:07:48.430 16:19:22 -- accel/accel.sh@20 -- # val= 00:07:48.430 ************************************ 00:07:48.430 END TEST accel_decomp_mcore 00:07:48.430 ************************************ 00:07:48.430 16:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.430 16:19:22 -- accel/accel.sh@19 -- # IFS=: 00:07:48.430 16:19:22 -- accel/accel.sh@19 -- # read -r var val 00:07:48.430 16:19:22 -- accel/accel.sh@20 -- # val= 
00:07:48.430 16:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.430 16:19:22 -- accel/accel.sh@19 -- # IFS=: 00:07:48.430 16:19:22 -- accel/accel.sh@19 -- # read -r var val 00:07:48.430 16:19:22 -- accel/accel.sh@20 -- # val= 00:07:48.430 16:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.430 16:19:22 -- accel/accel.sh@19 -- # IFS=: 00:07:48.430 16:19:22 -- accel/accel.sh@19 -- # read -r var val 00:07:48.430 16:19:22 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:48.430 16:19:22 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:48.430 16:19:22 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:48.430 00:07:48.430 real 0m1.561s 00:07:48.430 user 0m4.732s 00:07:48.430 sys 0m0.140s 00:07:48.430 16:19:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:48.430 16:19:22 -- common/autotest_common.sh@10 -- # set +x 00:07:48.430 16:19:22 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:48.430 16:19:22 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:48.430 16:19:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:48.430 16:19:22 -- common/autotest_common.sh@10 -- # set +x 00:07:48.430 ************************************ 00:07:48.430 START TEST accel_decomp_full_mcore 00:07:48.430 ************************************ 00:07:48.430 16:19:22 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:48.430 16:19:22 -- accel/accel.sh@16 -- # local accel_opc 00:07:48.430 16:19:22 -- accel/accel.sh@17 -- # local accel_module 00:07:48.430 16:19:22 -- accel/accel.sh@19 -- # IFS=: 00:07:48.430 16:19:22 -- accel/accel.sh@19 -- # read -r var val 00:07:48.430 16:19:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:48.430 16:19:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:48.430 16:19:22 -- accel/accel.sh@12 -- # build_accel_config 00:07:48.430 16:19:22 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:48.430 16:19:22 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:48.430 16:19:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:48.430 16:19:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:48.430 16:19:22 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:48.430 16:19:22 -- accel/accel.sh@40 -- # local IFS=, 00:07:48.430 16:19:22 -- accel/accel.sh@41 -- # jq -r . 00:07:48.430 [2024-04-17 16:19:22.407862] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 
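The mcore result above shows the expected multi-reactor signature: 4.732 s of user CPU time against 1.561 s of wall time, because four polling reactors burn CPU in parallel during the one-second run. A small helper for sanity-checking such masks against the "Total cores available" notices (hypothetical, not part of the suite):

mask_cores() {   # count set bits in an SPDK -m/--cpumask hex value
    local mask=$((16#${1#0x})) n=0
    while (( mask > 0 )); do
        n=$(( n + (mask & 1) ))
        mask=$(( mask >> 1 ))
    done
    echo "$n"
}
mask_cores 0xf   # prints 4, matching "Total cores available: 4"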
00:07:48.430 [2024-04-17 16:19:22.407956] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64097 ] 00:07:48.688 [2024-04-17 16:19:22.543154] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:48.688 [2024-04-17 16:19:22.681844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.688 [2024-04-17 16:19:22.681971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:48.688 [2024-04-17 16:19:22.682114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:48.688 [2024-04-17 16:19:22.682119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.947 16:19:22 -- accel/accel.sh@20 -- # val= 00:07:48.947 16:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.947 16:19:22 -- accel/accel.sh@19 -- # IFS=: 00:07:48.947 16:19:22 -- accel/accel.sh@19 -- # read -r var val 00:07:48.947 16:19:22 -- accel/accel.sh@20 -- # val= 00:07:48.947 16:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.947 16:19:22 -- accel/accel.sh@19 -- # IFS=: 00:07:48.947 16:19:22 -- accel/accel.sh@19 -- # read -r var val 00:07:48.947 16:19:22 -- accel/accel.sh@20 -- # val= 00:07:48.947 16:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.947 16:19:22 -- accel/accel.sh@19 -- # IFS=: 00:07:48.947 16:19:22 -- accel/accel.sh@19 -- # read -r var val 00:07:48.947 16:19:22 -- accel/accel.sh@20 -- # val=0xf 00:07:48.947 16:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.947 16:19:22 -- accel/accel.sh@19 -- # IFS=: 00:07:48.947 16:19:22 -- accel/accel.sh@19 -- # read -r var val 00:07:48.947 16:19:22 -- accel/accel.sh@20 -- # val= 00:07:48.947 16:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.947 16:19:22 -- accel/accel.sh@19 -- # IFS=: 00:07:48.947 16:19:22 -- accel/accel.sh@19 -- # read -r var val 00:07:48.947 16:19:22 -- accel/accel.sh@20 -- # val= 00:07:48.947 16:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.947 16:19:22 -- accel/accel.sh@19 -- # IFS=: 00:07:48.947 16:19:22 -- accel/accel.sh@19 -- # read -r var val 00:07:48.947 16:19:22 -- accel/accel.sh@20 -- # val=decompress 00:07:48.947 16:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.947 16:19:22 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:48.947 16:19:22 -- accel/accel.sh@19 -- # IFS=: 00:07:48.947 16:19:22 -- accel/accel.sh@19 -- # read -r var val 00:07:48.947 16:19:22 -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:48.947 16:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.947 16:19:22 -- accel/accel.sh@19 -- # IFS=: 00:07:48.947 16:19:22 -- accel/accel.sh@19 -- # read -r var val 00:07:48.947 16:19:22 -- accel/accel.sh@20 -- # val= 00:07:48.947 16:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.947 16:19:22 -- accel/accel.sh@19 -- # IFS=: 00:07:48.947 16:19:22 -- accel/accel.sh@19 -- # read -r var val 00:07:48.947 16:19:22 -- accel/accel.sh@20 -- # val=software 00:07:48.947 16:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.947 16:19:22 -- accel/accel.sh@22 -- # accel_module=software 00:07:48.947 16:19:22 -- accel/accel.sh@19 -- # IFS=: 00:07:48.947 16:19:22 -- accel/accel.sh@19 -- # read -r var val 00:07:48.947 16:19:22 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:48.947 16:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.947 16:19:22 -- accel/accel.sh@19 -- # IFS=: 
00:07:48.947 16:19:22 -- accel/accel.sh@19 -- # read -r var val 00:07:48.947 16:19:22 -- accel/accel.sh@20 -- # val=32 00:07:48.947 16:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.947 16:19:22 -- accel/accel.sh@19 -- # IFS=: 00:07:48.947 16:19:22 -- accel/accel.sh@19 -- # read -r var val 00:07:48.947 16:19:22 -- accel/accel.sh@20 -- # val=32 00:07:48.947 16:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.947 16:19:22 -- accel/accel.sh@19 -- # IFS=: 00:07:48.947 16:19:22 -- accel/accel.sh@19 -- # read -r var val 00:07:48.947 16:19:22 -- accel/accel.sh@20 -- # val=1 00:07:48.947 16:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.947 16:19:22 -- accel/accel.sh@19 -- # IFS=: 00:07:48.947 16:19:22 -- accel/accel.sh@19 -- # read -r var val 00:07:48.947 16:19:22 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:48.947 16:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.947 16:19:22 -- accel/accel.sh@19 -- # IFS=: 00:07:48.947 16:19:22 -- accel/accel.sh@19 -- # read -r var val 00:07:48.947 16:19:22 -- accel/accel.sh@20 -- # val=Yes 00:07:48.947 16:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.947 16:19:22 -- accel/accel.sh@19 -- # IFS=: 00:07:48.947 16:19:22 -- accel/accel.sh@19 -- # read -r var val 00:07:48.947 16:19:22 -- accel/accel.sh@20 -- # val= 00:07:48.947 16:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.947 16:19:22 -- accel/accel.sh@19 -- # IFS=: 00:07:48.947 16:19:22 -- accel/accel.sh@19 -- # read -r var val 00:07:48.947 16:19:22 -- accel/accel.sh@20 -- # val= 00:07:48.947 16:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.947 16:19:22 -- accel/accel.sh@19 -- # IFS=: 00:07:48.947 16:19:22 -- accel/accel.sh@19 -- # read -r var val 00:07:50.323 16:19:23 -- accel/accel.sh@20 -- # val= 00:07:50.323 16:19:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.323 16:19:23 -- accel/accel.sh@19 -- # IFS=: 00:07:50.323 16:19:23 -- accel/accel.sh@19 -- # read -r var val 00:07:50.323 16:19:23 -- accel/accel.sh@20 -- # val= 00:07:50.323 16:19:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.323 16:19:23 -- accel/accel.sh@19 -- # IFS=: 00:07:50.323 16:19:23 -- accel/accel.sh@19 -- # read -r var val 00:07:50.323 16:19:23 -- accel/accel.sh@20 -- # val= 00:07:50.323 16:19:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.323 16:19:23 -- accel/accel.sh@19 -- # IFS=: 00:07:50.323 16:19:23 -- accel/accel.sh@19 -- # read -r var val 00:07:50.323 16:19:23 -- accel/accel.sh@20 -- # val= 00:07:50.323 16:19:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.323 16:19:23 -- accel/accel.sh@19 -- # IFS=: 00:07:50.323 16:19:23 -- accel/accel.sh@19 -- # read -r var val 00:07:50.323 16:19:23 -- accel/accel.sh@20 -- # val= 00:07:50.323 16:19:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.323 16:19:23 -- accel/accel.sh@19 -- # IFS=: 00:07:50.323 16:19:23 -- accel/accel.sh@19 -- # read -r var val 00:07:50.323 16:19:23 -- accel/accel.sh@20 -- # val= 00:07:50.323 16:19:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.323 16:19:23 -- accel/accel.sh@19 -- # IFS=: 00:07:50.323 16:19:23 -- accel/accel.sh@19 -- # read -r var val 00:07:50.323 16:19:23 -- accel/accel.sh@20 -- # val= 00:07:50.323 16:19:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.323 16:19:23 -- accel/accel.sh@19 -- # IFS=: 00:07:50.323 16:19:23 -- accel/accel.sh@19 -- # read -r var val 00:07:50.323 16:19:23 -- accel/accel.sh@20 -- # val= 00:07:50.323 16:19:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.323 16:19:23 -- accel/accel.sh@19 -- # IFS=: 00:07:50.323 16:19:23 -- 
accel/accel.sh@19 -- # read -r var val 00:07:50.323 16:19:23 -- accel/accel.sh@20 -- # val= 00:07:50.323 16:19:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.323 16:19:23 -- accel/accel.sh@19 -- # IFS=: 00:07:50.323 16:19:23 -- accel/accel.sh@19 -- # read -r var val 00:07:50.323 16:19:23 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:50.323 16:19:23 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:50.323 16:19:23 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:50.323 00:07:50.323 real 0m1.592s 00:07:50.323 user 0m4.803s 00:07:50.323 sys 0m0.141s 00:07:50.323 16:19:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:50.323 16:19:23 -- common/autotest_common.sh@10 -- # set +x 00:07:50.323 ************************************ 00:07:50.323 END TEST accel_decomp_full_mcore 00:07:50.323 ************************************ 00:07:50.323 16:19:24 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:50.323 16:19:24 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:50.323 16:19:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:50.323 16:19:24 -- common/autotest_common.sh@10 -- # set +x 00:07:50.323 ************************************ 00:07:50.323 START TEST accel_decomp_mthread 00:07:50.323 ************************************ 00:07:50.323 16:19:24 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:50.323 16:19:24 -- accel/accel.sh@16 -- # local accel_opc 00:07:50.323 16:19:24 -- accel/accel.sh@17 -- # local accel_module 00:07:50.323 16:19:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:50.323 16:19:24 -- accel/accel.sh@19 -- # IFS=: 00:07:50.323 16:19:24 -- accel/accel.sh@19 -- # read -r var val 00:07:50.323 16:19:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:50.323 16:19:24 -- accel/accel.sh@12 -- # build_accel_config 00:07:50.323 16:19:24 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:50.323 16:19:24 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:50.323 16:19:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.323 16:19:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.323 16:19:24 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:50.323 16:19:24 -- accel/accel.sh@40 -- # local IFS=, 00:07:50.323 16:19:24 -- accel/accel.sh@41 -- # jq -r . 00:07:50.323 [2024-04-17 16:19:24.112662] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 
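accel_decomp_mthread returns to a single core but passes -T 2, recorded as val=2 in the trace below: accel_perf runs two worker threads on the core instead of one, while the framework still starts a single reactor on core 0. By hand, under the same assumptions as the earlier sketches:

/home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
    -c <(echo '{}') \
    -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2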
00:07:50.323 [2024-04-17 16:19:24.112796] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64140 ] 00:07:50.323 [2024-04-17 16:19:24.247725] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.581 [2024-04-17 16:19:24.376713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.581 16:19:24 -- accel/accel.sh@20 -- # val= 00:07:50.581 16:19:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.581 16:19:24 -- accel/accel.sh@19 -- # IFS=: 00:07:50.581 16:19:24 -- accel/accel.sh@19 -- # read -r var val 00:07:50.581 16:19:24 -- accel/accel.sh@20 -- # val= 00:07:50.581 16:19:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.581 16:19:24 -- accel/accel.sh@19 -- # IFS=: 00:07:50.581 16:19:24 -- accel/accel.sh@19 -- # read -r var val 00:07:50.581 16:19:24 -- accel/accel.sh@20 -- # val= 00:07:50.581 16:19:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.581 16:19:24 -- accel/accel.sh@19 -- # IFS=: 00:07:50.581 16:19:24 -- accel/accel.sh@19 -- # read -r var val 00:07:50.581 16:19:24 -- accel/accel.sh@20 -- # val=0x1 00:07:50.581 16:19:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.581 16:19:24 -- accel/accel.sh@19 -- # IFS=: 00:07:50.581 16:19:24 -- accel/accel.sh@19 -- # read -r var val 00:07:50.581 16:19:24 -- accel/accel.sh@20 -- # val= 00:07:50.581 16:19:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.581 16:19:24 -- accel/accel.sh@19 -- # IFS=: 00:07:50.581 16:19:24 -- accel/accel.sh@19 -- # read -r var val 00:07:50.581 16:19:24 -- accel/accel.sh@20 -- # val= 00:07:50.581 16:19:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.581 16:19:24 -- accel/accel.sh@19 -- # IFS=: 00:07:50.581 16:19:24 -- accel/accel.sh@19 -- # read -r var val 00:07:50.581 16:19:24 -- accel/accel.sh@20 -- # val=decompress 00:07:50.581 16:19:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.582 16:19:24 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:50.582 16:19:24 -- accel/accel.sh@19 -- # IFS=: 00:07:50.582 16:19:24 -- accel/accel.sh@19 -- # read -r var val 00:07:50.582 16:19:24 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:50.582 16:19:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.582 16:19:24 -- accel/accel.sh@19 -- # IFS=: 00:07:50.582 16:19:24 -- accel/accel.sh@19 -- # read -r var val 00:07:50.582 16:19:24 -- accel/accel.sh@20 -- # val= 00:07:50.582 16:19:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.582 16:19:24 -- accel/accel.sh@19 -- # IFS=: 00:07:50.582 16:19:24 -- accel/accel.sh@19 -- # read -r var val 00:07:50.582 16:19:24 -- accel/accel.sh@20 -- # val=software 00:07:50.582 16:19:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.582 16:19:24 -- accel/accel.sh@22 -- # accel_module=software 00:07:50.582 16:19:24 -- accel/accel.sh@19 -- # IFS=: 00:07:50.582 16:19:24 -- accel/accel.sh@19 -- # read -r var val 00:07:50.582 16:19:24 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:50.582 16:19:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.582 16:19:24 -- accel/accel.sh@19 -- # IFS=: 00:07:50.582 16:19:24 -- accel/accel.sh@19 -- # read -r var val 00:07:50.582 16:19:24 -- accel/accel.sh@20 -- # val=32 00:07:50.582 16:19:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.582 16:19:24 -- accel/accel.sh@19 -- # IFS=: 00:07:50.582 16:19:24 -- accel/accel.sh@19 -- # read -r var val 00:07:50.582 16:19:24 -- 
accel/accel.sh@20 -- # val=32 00:07:50.582 16:19:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.582 16:19:24 -- accel/accel.sh@19 -- # IFS=: 00:07:50.582 16:19:24 -- accel/accel.sh@19 -- # read -r var val 00:07:50.582 16:19:24 -- accel/accel.sh@20 -- # val=2 00:07:50.582 16:19:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.582 16:19:24 -- accel/accel.sh@19 -- # IFS=: 00:07:50.582 16:19:24 -- accel/accel.sh@19 -- # read -r var val 00:07:50.582 16:19:24 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:50.582 16:19:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.582 16:19:24 -- accel/accel.sh@19 -- # IFS=: 00:07:50.582 16:19:24 -- accel/accel.sh@19 -- # read -r var val 00:07:50.582 16:19:24 -- accel/accel.sh@20 -- # val=Yes 00:07:50.582 16:19:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.582 16:19:24 -- accel/accel.sh@19 -- # IFS=: 00:07:50.582 16:19:24 -- accel/accel.sh@19 -- # read -r var val 00:07:50.582 16:19:24 -- accel/accel.sh@20 -- # val= 00:07:50.582 16:19:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.582 16:19:24 -- accel/accel.sh@19 -- # IFS=: 00:07:50.582 16:19:24 -- accel/accel.sh@19 -- # read -r var val 00:07:50.582 16:19:24 -- accel/accel.sh@20 -- # val= 00:07:50.582 16:19:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.582 16:19:24 -- accel/accel.sh@19 -- # IFS=: 00:07:50.582 16:19:24 -- accel/accel.sh@19 -- # read -r var val 00:07:51.958 16:19:25 -- accel/accel.sh@20 -- # val= 00:07:51.958 16:19:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.958 16:19:25 -- accel/accel.sh@19 -- # IFS=: 00:07:51.958 16:19:25 -- accel/accel.sh@19 -- # read -r var val 00:07:51.958 16:19:25 -- accel/accel.sh@20 -- # val= 00:07:51.958 16:19:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.958 16:19:25 -- accel/accel.sh@19 -- # IFS=: 00:07:51.958 16:19:25 -- accel/accel.sh@19 -- # read -r var val 00:07:51.958 16:19:25 -- accel/accel.sh@20 -- # val= 00:07:51.958 16:19:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.958 16:19:25 -- accel/accel.sh@19 -- # IFS=: 00:07:51.958 16:19:25 -- accel/accel.sh@19 -- # read -r var val 00:07:51.958 16:19:25 -- accel/accel.sh@20 -- # val= 00:07:51.958 16:19:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.958 16:19:25 -- accel/accel.sh@19 -- # IFS=: 00:07:51.958 16:19:25 -- accel/accel.sh@19 -- # read -r var val 00:07:51.958 16:19:25 -- accel/accel.sh@20 -- # val= 00:07:51.958 16:19:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.958 16:19:25 -- accel/accel.sh@19 -- # IFS=: 00:07:51.958 16:19:25 -- accel/accel.sh@19 -- # read -r var val 00:07:51.958 16:19:25 -- accel/accel.sh@20 -- # val= 00:07:51.958 16:19:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.958 16:19:25 -- accel/accel.sh@19 -- # IFS=: 00:07:51.958 16:19:25 -- accel/accel.sh@19 -- # read -r var val 00:07:51.958 16:19:25 -- accel/accel.sh@20 -- # val= 00:07:51.958 16:19:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.958 16:19:25 -- accel/accel.sh@19 -- # IFS=: 00:07:51.958 16:19:25 -- accel/accel.sh@19 -- # read -r var val 00:07:51.958 16:19:25 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:51.958 16:19:25 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:51.958 16:19:25 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:51.958 00:07:51.958 real 0m1.578s 00:07:51.958 user 0m1.349s 00:07:51.958 sys 0m0.128s 00:07:51.958 ************************************ 00:07:51.958 END TEST accel_decomp_mthread 00:07:51.958 ************************************ 00:07:51.958 16:19:25 -- common/autotest_common.sh@1112 -- # 
xtrace_disable 00:07:51.958 16:19:25 -- common/autotest_common.sh@10 -- # set +x 00:07:51.958 16:19:25 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:51.958 16:19:25 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:51.958 16:19:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:51.958 16:19:25 -- common/autotest_common.sh@10 -- # set +x 00:07:51.958 ************************************ 00:07:51.958 START TEST accel_deomp_full_mthread 00:07:51.958 ************************************ 00:07:51.958 16:19:25 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:51.958 16:19:25 -- accel/accel.sh@16 -- # local accel_opc 00:07:51.958 16:19:25 -- accel/accel.sh@17 -- # local accel_module 00:07:51.958 16:19:25 -- accel/accel.sh@19 -- # IFS=: 00:07:51.958 16:19:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:51.958 16:19:25 -- accel/accel.sh@19 -- # read -r var val 00:07:51.958 16:19:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:51.958 16:19:25 -- accel/accel.sh@12 -- # build_accel_config 00:07:51.958 16:19:25 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:51.958 16:19:25 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:51.958 16:19:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:51.958 16:19:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:51.958 16:19:25 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:51.958 16:19:25 -- accel/accel.sh@40 -- # local IFS=, 00:07:51.958 16:19:25 -- accel/accel.sh@41 -- # jq -r . 00:07:51.958 [2024-04-17 16:19:25.805320] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 
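The final throughput case combines the two variations: -o 0 (whole-file 111250-byte blocks, visible again as val='111250 bytes' below) together with -T 2, so two threads each decompress full-file-sized jobs for one second. The test name accel_deomp_full_mthread is spelled that way in the suite itself; the banners simply echo it. Restated by hand, same assumptions as before:

/home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
    -c <(echo '{}') \
    -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib \
    -y -o 0 -T 2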
00:07:51.958 [2024-04-17 16:19:25.805468] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64178 ] 00:07:51.958 [2024-04-17 16:19:25.947336] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.217 [2024-04-17 16:19:26.078990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.217 16:19:26 -- accel/accel.sh@20 -- # val= 00:07:52.217 16:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.217 16:19:26 -- accel/accel.sh@19 -- # IFS=: 00:07:52.217 16:19:26 -- accel/accel.sh@19 -- # read -r var val 00:07:52.217 16:19:26 -- accel/accel.sh@20 -- # val= 00:07:52.217 16:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.217 16:19:26 -- accel/accel.sh@19 -- # IFS=: 00:07:52.217 16:19:26 -- accel/accel.sh@19 -- # read -r var val 00:07:52.217 16:19:26 -- accel/accel.sh@20 -- # val= 00:07:52.217 16:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.217 16:19:26 -- accel/accel.sh@19 -- # IFS=: 00:07:52.217 16:19:26 -- accel/accel.sh@19 -- # read -r var val 00:07:52.217 16:19:26 -- accel/accel.sh@20 -- # val=0x1 00:07:52.217 16:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.217 16:19:26 -- accel/accel.sh@19 -- # IFS=: 00:07:52.217 16:19:26 -- accel/accel.sh@19 -- # read -r var val 00:07:52.217 16:19:26 -- accel/accel.sh@20 -- # val= 00:07:52.217 16:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.217 16:19:26 -- accel/accel.sh@19 -- # IFS=: 00:07:52.217 16:19:26 -- accel/accel.sh@19 -- # read -r var val 00:07:52.217 16:19:26 -- accel/accel.sh@20 -- # val= 00:07:52.217 16:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.217 16:19:26 -- accel/accel.sh@19 -- # IFS=: 00:07:52.217 16:19:26 -- accel/accel.sh@19 -- # read -r var val 00:07:52.217 16:19:26 -- accel/accel.sh@20 -- # val=decompress 00:07:52.217 16:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.217 16:19:26 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:52.217 16:19:26 -- accel/accel.sh@19 -- # IFS=: 00:07:52.217 16:19:26 -- accel/accel.sh@19 -- # read -r var val 00:07:52.217 16:19:26 -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:52.217 16:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.217 16:19:26 -- accel/accel.sh@19 -- # IFS=: 00:07:52.217 16:19:26 -- accel/accel.sh@19 -- # read -r var val 00:07:52.217 16:19:26 -- accel/accel.sh@20 -- # val= 00:07:52.217 16:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.217 16:19:26 -- accel/accel.sh@19 -- # IFS=: 00:07:52.217 16:19:26 -- accel/accel.sh@19 -- # read -r var val 00:07:52.217 16:19:26 -- accel/accel.sh@20 -- # val=software 00:07:52.217 16:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.217 16:19:26 -- accel/accel.sh@22 -- # accel_module=software 00:07:52.217 16:19:26 -- accel/accel.sh@19 -- # IFS=: 00:07:52.217 16:19:26 -- accel/accel.sh@19 -- # read -r var val 00:07:52.217 16:19:26 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:52.217 16:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.217 16:19:26 -- accel/accel.sh@19 -- # IFS=: 00:07:52.217 16:19:26 -- accel/accel.sh@19 -- # read -r var val 00:07:52.217 16:19:26 -- accel/accel.sh@20 -- # val=32 00:07:52.217 16:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.217 16:19:26 -- accel/accel.sh@19 -- # IFS=: 00:07:52.217 16:19:26 -- accel/accel.sh@19 -- # read -r var val 00:07:52.217 16:19:26 -- 
accel/accel.sh@20 -- # val=32 00:07:52.217 16:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.217 16:19:26 -- accel/accel.sh@19 -- # IFS=: 00:07:52.217 16:19:26 -- accel/accel.sh@19 -- # read -r var val 00:07:52.217 16:19:26 -- accel/accel.sh@20 -- # val=2 00:07:52.217 16:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.217 16:19:26 -- accel/accel.sh@19 -- # IFS=: 00:07:52.217 16:19:26 -- accel/accel.sh@19 -- # read -r var val 00:07:52.217 16:19:26 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:52.217 16:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.217 16:19:26 -- accel/accel.sh@19 -- # IFS=: 00:07:52.217 16:19:26 -- accel/accel.sh@19 -- # read -r var val 00:07:52.217 16:19:26 -- accel/accel.sh@20 -- # val=Yes 00:07:52.217 16:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.217 16:19:26 -- accel/accel.sh@19 -- # IFS=: 00:07:52.217 16:19:26 -- accel/accel.sh@19 -- # read -r var val 00:07:52.217 16:19:26 -- accel/accel.sh@20 -- # val= 00:07:52.217 16:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.217 16:19:26 -- accel/accel.sh@19 -- # IFS=: 00:07:52.217 16:19:26 -- accel/accel.sh@19 -- # read -r var val 00:07:52.217 16:19:26 -- accel/accel.sh@20 -- # val= 00:07:52.217 16:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.217 16:19:26 -- accel/accel.sh@19 -- # IFS=: 00:07:52.217 16:19:26 -- accel/accel.sh@19 -- # read -r var val 00:07:53.593 16:19:27 -- accel/accel.sh@20 -- # val= 00:07:53.593 16:19:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.593 16:19:27 -- accel/accel.sh@19 -- # IFS=: 00:07:53.593 16:19:27 -- accel/accel.sh@19 -- # read -r var val 00:07:53.593 16:19:27 -- accel/accel.sh@20 -- # val= 00:07:53.593 16:19:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.593 16:19:27 -- accel/accel.sh@19 -- # IFS=: 00:07:53.593 16:19:27 -- accel/accel.sh@19 -- # read -r var val 00:07:53.593 16:19:27 -- accel/accel.sh@20 -- # val= 00:07:53.593 16:19:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.593 16:19:27 -- accel/accel.sh@19 -- # IFS=: 00:07:53.593 16:19:27 -- accel/accel.sh@19 -- # read -r var val 00:07:53.593 16:19:27 -- accel/accel.sh@20 -- # val= 00:07:53.593 16:19:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.593 16:19:27 -- accel/accel.sh@19 -- # IFS=: 00:07:53.593 16:19:27 -- accel/accel.sh@19 -- # read -r var val 00:07:53.593 16:19:27 -- accel/accel.sh@20 -- # val= 00:07:53.593 16:19:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.593 16:19:27 -- accel/accel.sh@19 -- # IFS=: 00:07:53.593 16:19:27 -- accel/accel.sh@19 -- # read -r var val 00:07:53.593 16:19:27 -- accel/accel.sh@20 -- # val= 00:07:53.593 16:19:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.593 16:19:27 -- accel/accel.sh@19 -- # IFS=: 00:07:53.593 16:19:27 -- accel/accel.sh@19 -- # read -r var val 00:07:53.593 16:19:27 -- accel/accel.sh@20 -- # val= 00:07:53.593 16:19:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:53.593 16:19:27 -- accel/accel.sh@19 -- # IFS=: 00:07:53.593 16:19:27 -- accel/accel.sh@19 -- # read -r var val 00:07:53.593 16:19:27 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:53.593 16:19:27 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:53.593 16:19:27 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:53.593 00:07:53.593 real 0m1.592s 00:07:53.593 user 0m1.368s 00:07:53.593 sys 0m0.127s 00:07:53.593 16:19:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:53.593 16:19:27 -- common/autotest_common.sh@10 -- # set +x 00:07:53.593 ************************************ 00:07:53.593 END 
TEST accel_decomp_full_mthread 00:07:53.593 ************************************ 00:07:53.593 16:19:27 -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:53.593 16:19:27 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:53.593 16:19:27 -- accel/accel.sh@137 -- # build_accel_config 00:07:53.593 16:19:27 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:53.593 16:19:27 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:53.593 16:19:27 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:53.593 16:19:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:53.593 16:19:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:53.593 16:19:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:53.593 16:19:27 -- common/autotest_common.sh@10 -- # set +x 00:07:53.593 16:19:27 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:53.593 16:19:27 -- accel/accel.sh@40 -- # local IFS=, 00:07:53.593 16:19:27 -- accel/accel.sh@41 -- # jq -r . 00:07:53.593 ************************************ 00:07:53.593 START TEST accel_dif_functional_tests 00:07:53.593 ************************************ 00:07:53.593 16:19:27 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:53.593 [2024-04-17 16:19:27.543308] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:07:53.593 [2024-04-17 16:19:27.543410] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64218 ] 00:07:53.851 [2024-04-17 16:19:27.680887] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:53.851 [2024-04-17 16:19:27.803566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.851 [2024-04-17 16:19:27.803711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:53.851 [2024-04-17 16:19:27.803716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.110 00:07:54.110 00:07:54.110 CUnit - A unit testing framework for C - Version 2.1-3 00:07:54.110 http://cunit.sourceforge.net/ 00:07:54.110 00:07:54.110 00:07:54.110 Suite: accel_dif 00:07:54.110 Test: verify: DIF generated, GUARD check ...passed 00:07:54.110 Test: verify: DIF generated, APPTAG check ...passed 00:07:54.110 Test: verify: DIF generated, REFTAG check ...passed 00:07:54.110 Test: verify: DIF not generated, GUARD check ...[2024-04-17 16:19:27.900564] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:54.110 [2024-04-17 16:19:27.900815] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:54.110 passed 00:07:54.110 Test: verify: DIF not generated, APPTAG check ...[2024-04-17 16:19:27.901005] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:54.110 [2024-04-17 16:19:27.901281] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:54.110 passed 00:07:54.110 Test: verify: DIF not generated, REFTAG check ...[2024-04-17 16:19:27.901483] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:54.110 [2024-04-17 16:19:27.901601] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:54.110 passed 00:07:54.110
Test: verify: APPTAG correct, APPTAG check ...passed 00:07:54.110 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-17 16:19:27.901878] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:54.110 passed 00:07:54.110 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:54.110 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:54.110 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:54.110 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-17 16:19:27.902360] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:54.110 passed 00:07:54.110 Test: generate copy: DIF generated, GUARD check ...passed 00:07:54.110 Test: generate copy: DIF generated, APPTAG check ...passed 00:07:54.110 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:54.110 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:54.110 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:54.110 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:54.110 Test: generate copy: iovecs-len validate ...[2024-04-17 16:19:27.903455] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:07:54.110 passed 00:07:54.110 Test: generate copy: buffer alignment validate ...passed 00:07:54.110 00:07:54.110 Run Summary: Type Total Ran Passed Failed Inactive 00:07:54.110 suites 1 1 n/a 0 0 00:07:54.110 tests 20 20 20 0 0 00:07:54.110 asserts 204 204 204 0 n/a 00:07:54.110 00:07:54.110 Elapsed time = 0.008 seconds 00:07:54.369 00:07:54.369 real 0m0.666s 00:07:54.369 user 0m0.843s 00:07:54.369 sys 0m0.146s 00:07:54.369 ************************************ 00:07:54.369 END TEST accel_dif_functional_tests 00:07:54.369 ************************************ 00:07:54.369 16:19:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:54.369 16:19:28 -- common/autotest_common.sh@10 -- # set +x 00:07:54.369 00:07:54.369 real 0m38.119s 00:07:54.369 user 0m38.492s 00:07:54.369 sys 0m4.941s 00:07:54.369 16:19:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:54.369 ************************************ 00:07:54.369 END TEST accel 00:07:54.369 ************************************ 00:07:54.369 16:19:28 -- common/autotest_common.sh@10 -- # set +x 00:07:54.369 16:19:28 -- spdk/autotest.sh@179 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:54.369 16:19:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:54.369 16:19:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:54.369 16:19:28 -- common/autotest_common.sh@10 -- # set +x 00:07:54.369 ************************************ 00:07:54.369 START TEST accel_rpc 00:07:54.369 ************************************ 00:07:54.369 16:19:28 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:54.369 * Looking for test storage... 00:07:54.369 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:54.369 16:19:28 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:54.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
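
The dif tool exercised above takes its accel configuration as JSON on /dev/fd/62 rather than from a file on disk: build_accel_config assembles the document and the test script hands it over on an anonymous descriptor. A minimal sketch of that pattern, with gen_accel_config as an illustrative stand-in for build_accel_config and a placeholder payload:

  # illustrative stand-in for build_accel_config; the real payload is
  # assembled from the accel_json_cfg array seen in the trace above
  gen_accel_config() {
    printf '{"subsystems": []}\n'
  }
  run_with_cfg() {
    local bin=$1
    # expose the generated JSON to the binary as /dev/fd/62
    "$bin" -c /dev/fd/62 62< <(gen_accel_config)
  }
  run_with_cfg /home/vagrant/spdk_repo/spdk/test/accel/dif/dif

Keeping the config on a descriptor avoids temp files and lets each test compose its module list on the fly.
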
00:07:54.369 16:19:28 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=64293 00:07:54.369 16:19:28 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:54.369 16:19:28 -- accel/accel_rpc.sh@15 -- # waitforlisten 64293 00:07:54.369 16:19:28 -- common/autotest_common.sh@817 -- # '[' -z 64293 ']' 00:07:54.369 16:19:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.369 16:19:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:54.369 16:19:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.369 16:19:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:54.369 16:19:28 -- common/autotest_common.sh@10 -- # set +x 00:07:54.628 [2024-04-17 16:19:28.434307] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:07:54.628 [2024-04-17 16:19:28.434408] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64293 ] 00:07:54.628 [2024-04-17 16:19:28.572228] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.886 [2024-04-17 16:19:28.706723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.453 16:19:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:55.453 16:19:29 -- common/autotest_common.sh@850 -- # return 0 00:07:55.453 16:19:29 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:55.453 16:19:29 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:55.453 16:19:29 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:55.453 16:19:29 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:55.453 16:19:29 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:55.453 16:19:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:55.453 16:19:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:55.453 16:19:29 -- common/autotest_common.sh@10 -- # set +x 00:07:55.453 ************************************ 00:07:55.453 START TEST accel_assign_opcode 00:07:55.453 ************************************ 00:07:55.453 16:19:29 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:07:55.453 16:19:29 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:55.453 16:19:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:55.453 16:19:29 -- common/autotest_common.sh@10 -- # set +x 00:07:55.453 [2024-04-17 16:19:29.491688] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:55.711 16:19:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:55.711 16:19:29 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:55.711 16:19:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:55.711 16:19:29 -- common/autotest_common.sh@10 -- # set +x 00:07:55.711 [2024-04-17 16:19:29.499691] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:55.711 16:19:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:55.711 16:19:29 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:55.711 16:19:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:55.711 16:19:29 -- common/autotest_common.sh@10 -- # set +x 00:07:55.711 16:19:29 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:55.711 16:19:29 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:55.711 16:19:29 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:55.711 16:19:29 -- accel/accel_rpc.sh@42 -- # grep software 00:07:55.711 16:19:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:55.711 16:19:29 -- common/autotest_common.sh@10 -- # set +x 00:07:55.711 16:19:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:55.969 software 00:07:55.969 ************************************ 00:07:55.969 END TEST accel_assign_opcode 00:07:55.969 ************************************ 00:07:55.969 00:07:55.969 real 0m0.304s 00:07:55.969 user 0m0.058s 00:07:55.969 sys 0m0.011s 00:07:55.969 16:19:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:55.969 16:19:29 -- common/autotest_common.sh@10 -- # set +x 00:07:55.969 16:19:29 -- accel/accel_rpc.sh@55 -- # killprocess 64293 00:07:55.969 16:19:29 -- common/autotest_common.sh@936 -- # '[' -z 64293 ']' 00:07:55.969 16:19:29 -- common/autotest_common.sh@940 -- # kill -0 64293 00:07:55.969 16:19:29 -- common/autotest_common.sh@941 -- # uname 00:07:55.969 16:19:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:55.969 16:19:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64293 00:07:55.969 killing process with pid 64293 00:07:55.969 16:19:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:55.969 16:19:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:55.969 16:19:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64293' 00:07:55.969 16:19:29 -- common/autotest_common.sh@955 -- # kill 64293 00:07:55.969 16:19:29 -- common/autotest_common.sh@960 -- # wait 64293 00:07:56.535 00:07:56.535 real 0m1.990s 00:07:56.535 user 0m2.116s 00:07:56.535 sys 0m0.466s 00:07:56.535 16:19:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:56.535 ************************************ 00:07:56.535 END TEST accel_rpc 00:07:56.535 ************************************ 00:07:56.535 16:19:30 -- common/autotest_common.sh@10 -- # set +x 00:07:56.535 16:19:30 -- spdk/autotest.sh@180 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:56.535 16:19:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:56.535 16:19:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:56.535 16:19:30 -- common/autotest_common.sh@10 -- # set +x 00:07:56.535 ************************************ 00:07:56.535 START TEST app_cmdline 00:07:56.535 ************************************ 00:07:56.535 16:19:30 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:56.535 * Looking for test storage... 00:07:56.535 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:56.535 16:19:30 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:56.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
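
The accel_assign_opcode suite that just passed only works because spdk_tgt was started with --wait-for-rpc: opcode-to-module assignments are only accepted before framework initialization. Condensed, the RPC flow it drives looks like this (rpc.py path as in this workspace; error handling omitted):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc accel_assign_opc -o copy -m incorrect   # bogus module, accepted at this stage
  $rpc accel_assign_opc -o copy -m software    # the later assignment wins
  $rpc framework_start_init
  $rpc accel_get_opc_assignments | jq -r .copy | grep software   # expect: software
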
00:07:56.535 16:19:30 -- app/cmdline.sh@17 -- # spdk_tgt_pid=64413 00:07:56.535 16:19:30 -- app/cmdline.sh@18 -- # waitforlisten 64413 00:07:56.535 16:19:30 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:56.535 16:19:30 -- common/autotest_common.sh@817 -- # '[' -z 64413 ']' 00:07:56.535 16:19:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.535 16:19:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:56.535 16:19:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.535 16:19:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:56.535 16:19:30 -- common/autotest_common.sh@10 -- # set +x 00:07:56.535 [2024-04-17 16:19:30.562940] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:07:56.535 [2024-04-17 16:19:30.563056] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64413 ] 00:07:56.793 [2024-04-17 16:19:30.702204] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.793 [2024-04-17 16:19:30.836601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.727 16:19:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:57.727 16:19:31 -- common/autotest_common.sh@850 -- # return 0 00:07:57.727 16:19:31 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:57.985 { 00:07:57.985 "fields": { 00:07:57.985 "commit": "74bc86fe4", 00:07:57.985 "major": 24, 00:07:57.985 "minor": 5, 00:07:57.985 "patch": 0, 00:07:57.985 "suffix": "-pre" 00:07:57.985 }, 00:07:57.985 "version": "SPDK v24.05-pre git sha1 74bc86fe4" 00:07:57.985 } 00:07:57.985 16:19:31 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:57.985 16:19:31 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:57.985 16:19:31 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:57.985 16:19:31 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:57.985 16:19:31 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:57.985 16:19:31 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:57.985 16:19:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:57.985 16:19:31 -- app/cmdline.sh@26 -- # sort 00:07:57.985 16:19:31 -- common/autotest_common.sh@10 -- # set +x 00:07:57.985 16:19:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:57.985 16:19:31 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:57.985 16:19:31 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:57.985 16:19:31 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:57.985 16:19:31 -- common/autotest_common.sh@638 -- # local es=0 00:07:57.985 16:19:31 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:57.985 16:19:31 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:57.985 16:19:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:57.985 16:19:31 -- common/autotest_common.sh@630 -- # type -t 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:57.985 16:19:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:57.985 16:19:31 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:57.985 16:19:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:57.985 16:19:31 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:57.985 16:19:31 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:57.985 16:19:31 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:58.244 2024/04/17 16:19:32 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:07:58.244 request: 00:07:58.244 { 00:07:58.244 "method": "env_dpdk_get_mem_stats", 00:07:58.244 "params": {} 00:07:58.244 } 00:07:58.244 Got JSON-RPC error response 00:07:58.244 GoRPCClient: error on JSON-RPC call 00:07:58.244 16:19:32 -- common/autotest_common.sh@641 -- # es=1 00:07:58.244 16:19:32 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:58.244 16:19:32 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:58.244 16:19:32 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:58.244 16:19:32 -- app/cmdline.sh@1 -- # killprocess 64413 00:07:58.244 16:19:32 -- common/autotest_common.sh@936 -- # '[' -z 64413 ']' 00:07:58.244 16:19:32 -- common/autotest_common.sh@940 -- # kill -0 64413 00:07:58.244 16:19:32 -- common/autotest_common.sh@941 -- # uname 00:07:58.244 16:19:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:58.244 16:19:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64413 00:07:58.244 16:19:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:58.244 16:19:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:58.244 16:19:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64413' 00:07:58.244 killing process with pid 64413 00:07:58.244 16:19:32 -- common/autotest_common.sh@955 -- # kill 64413 00:07:58.244 16:19:32 -- common/autotest_common.sh@960 -- # wait 64413 00:07:58.810 00:07:58.810 real 0m2.236s 00:07:58.810 user 0m2.794s 00:07:58.810 sys 0m0.506s 00:07:58.810 16:19:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:58.810 ************************************ 00:07:58.810 END TEST app_cmdline 00:07:58.810 ************************************ 00:07:58.810 16:19:32 -- common/autotest_common.sh@10 -- # set +x 00:07:58.810 16:19:32 -- spdk/autotest.sh@181 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:58.810 16:19:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:58.810 16:19:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:58.810 16:19:32 -- common/autotest_common.sh@10 -- # set +x 00:07:58.810 ************************************ 00:07:58.810 START TEST version 00:07:58.810 ************************************ 00:07:58.810 16:19:32 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:58.810 * Looking for test storage... 
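
The failing env_dpdk_get_mem_stats call above is the whole point of the cmdline suite: spdk_tgt was launched with an RPC allowlist, so only the two named methods resolve and every other method comes back as JSON-RPC error -32601. Condensed (startup wait omitted):

  bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $bin --rpcs-allowed spdk_get_version,rpc_get_methods &
  $rpc spdk_get_version | jq -r .version   # allowed: SPDK v24.05-pre git sha1 74bc86fe4
  $rpc env_dpdk_get_mem_stats              # blocked: Code=-32601 Msg=Method not found
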
00:07:58.810 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:58.810 16:19:32 -- app/version.sh@17 -- # get_header_version major 00:07:59.070 16:19:32 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:59.070 16:19:32 -- app/version.sh@14 -- # cut -f2 00:07:59.070 16:19:32 -- app/version.sh@14 -- # tr -d '"' 00:07:59.070 16:19:32 -- app/version.sh@17 -- # major=24 00:07:59.070 16:19:32 -- app/version.sh@18 -- # get_header_version minor 00:07:59.070 16:19:32 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:59.070 16:19:32 -- app/version.sh@14 -- # cut -f2 00:07:59.070 16:19:32 -- app/version.sh@14 -- # tr -d '"' 00:07:59.070 16:19:32 -- app/version.sh@18 -- # minor=5 00:07:59.070 16:19:32 -- app/version.sh@19 -- # get_header_version patch 00:07:59.070 16:19:32 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:59.070 16:19:32 -- app/version.sh@14 -- # cut -f2 00:07:59.070 16:19:32 -- app/version.sh@14 -- # tr -d '"' 00:07:59.070 16:19:32 -- app/version.sh@19 -- # patch=0 00:07:59.070 16:19:32 -- app/version.sh@20 -- # get_header_version suffix 00:07:59.070 16:19:32 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:59.070 16:19:32 -- app/version.sh@14 -- # cut -f2 00:07:59.070 16:19:32 -- app/version.sh@14 -- # tr -d '"' 00:07:59.070 16:19:32 -- app/version.sh@20 -- # suffix=-pre 00:07:59.070 16:19:32 -- app/version.sh@22 -- # version=24.5 00:07:59.070 16:19:32 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:59.070 16:19:32 -- app/version.sh@28 -- # version=24.5rc0 00:07:59.070 16:19:32 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:59.070 16:19:32 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:59.070 16:19:32 -- app/version.sh@30 -- # py_version=24.5rc0 00:07:59.070 16:19:32 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:07:59.070 00:07:59.070 real 0m0.154s 00:07:59.070 user 0m0.079s 00:07:59.070 sys 0m0.104s 00:07:59.070 16:19:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:59.070 16:19:32 -- common/autotest_common.sh@10 -- # set +x 00:07:59.070 ************************************ 00:07:59.070 END TEST version 00:07:59.070 ************************************ 00:07:59.070 16:19:32 -- spdk/autotest.sh@183 -- # '[' 0 -eq 1 ']' 00:07:59.070 16:19:32 -- spdk/autotest.sh@193 -- # uname -s 00:07:59.070 16:19:32 -- spdk/autotest.sh@193 -- # [[ Linux == Linux ]] 00:07:59.070 16:19:32 -- spdk/autotest.sh@194 -- # [[ 0 -eq 1 ]] 00:07:59.070 16:19:32 -- spdk/autotest.sh@194 -- # [[ 0 -eq 1 ]] 00:07:59.070 16:19:32 -- spdk/autotest.sh@206 -- # '[' 0 -eq 1 ']' 00:07:59.070 16:19:32 -- spdk/autotest.sh@253 -- # '[' 0 -eq 1 ']' 00:07:59.070 16:19:32 -- spdk/autotest.sh@257 -- # timing_exit lib 00:07:59.070 16:19:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:59.070 16:19:32 -- common/autotest_common.sh@10 -- # set +x 00:07:59.070 16:19:33 -- spdk/autotest.sh@259 -- # '[' 0 -eq 1 ']' 00:07:59.070 16:19:33 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:07:59.070 16:19:33 -- 
spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:07:59.070 16:19:33 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:07:59.070 16:19:33 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:07:59.070 16:19:33 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:07:59.070 16:19:33 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:59.070 16:19:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:59.070 16:19:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:59.070 16:19:33 -- common/autotest_common.sh@10 -- # set +x 00:07:59.070 ************************************ 00:07:59.070 START TEST nvmf_tcp 00:07:59.070 ************************************ 00:07:59.070 16:19:33 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:59.332 * Looking for test storage... 00:07:59.332 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:59.332 16:19:33 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:59.332 16:19:33 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:59.332 16:19:33 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:59.332 16:19:33 -- nvmf/common.sh@7 -- # uname -s 00:07:59.332 16:19:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:59.332 16:19:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:59.332 16:19:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:59.332 16:19:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:59.332 16:19:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:59.332 16:19:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:59.332 16:19:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:59.332 16:19:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:59.332 16:19:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:59.332 16:19:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:59.332 16:19:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d 00:07:59.332 16:19:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=35bbb10f-fc38-42ac-b909-033700c5e05d 00:07:59.332 16:19:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:59.332 16:19:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:59.332 16:19:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:59.332 16:19:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:59.332 16:19:33 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:59.332 16:19:33 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:59.332 16:19:33 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:59.332 16:19:33 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:59.332 16:19:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.332 16:19:33 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.332 16:19:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.332 16:19:33 -- paths/export.sh@5 -- # export PATH 00:07:59.332 16:19:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.332 16:19:33 -- nvmf/common.sh@47 -- # : 0 00:07:59.332 16:19:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:59.332 16:19:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:59.332 16:19:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:59.332 16:19:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:59.332 16:19:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:59.332 16:19:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:59.332 16:19:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:59.332 16:19:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:59.333 16:19:33 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:59.333 16:19:33 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:59.333 16:19:33 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:59.333 16:19:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:59.333 16:19:33 -- common/autotest_common.sh@10 -- # set +x 00:07:59.333 16:19:33 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:59.333 16:19:33 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:59.333 16:19:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:59.333 16:19:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:59.333 16:19:33 -- common/autotest_common.sh@10 -- # set +x 00:07:59.333 ************************************ 00:07:59.333 START TEST nvmf_example 00:07:59.333 ************************************ 00:07:59.333 16:19:33 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:59.333 * Looking for test storage... 
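
One detail from the nvmf/common.sh setup above: the host identity later passed to nvme connect is minted fresh with nvme gen-hostnqn, and the bare host ID is just the UUID tail of that NQN. A small sketch of one way to derive it (the parameter expansion is an assumption, not necessarily the exact line in common.sh):

  NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:35bbb10f-...
  NVME_HOSTID=${NVME_HOSTNQN##*:}    # strip through the last colon, leaving the UUID
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
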
00:07:59.333 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:59.333 16:19:33 -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:59.333 16:19:33 -- nvmf/common.sh@7 -- # uname -s 00:07:59.333 16:19:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:59.333 16:19:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:59.333 16:19:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:59.333 16:19:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:59.333 16:19:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:59.333 16:19:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:59.333 16:19:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:59.333 16:19:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:59.333 16:19:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:59.333 16:19:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:59.333 16:19:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d 00:07:59.333 16:19:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=35bbb10f-fc38-42ac-b909-033700c5e05d 00:07:59.333 16:19:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:59.333 16:19:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:59.333 16:19:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:59.333 16:19:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:59.333 16:19:33 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:59.333 16:19:33 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:59.333 16:19:33 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:59.333 16:19:33 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:59.333 16:19:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.333 16:19:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.333 16:19:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.333 16:19:33 -- paths/export.sh@5 -- # export PATH 00:07:59.333 16:19:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.333 16:19:33 -- nvmf/common.sh@47 -- # : 0 00:07:59.333 16:19:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:59.333 16:19:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:59.333 16:19:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:59.333 16:19:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:59.333 16:19:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:59.333 16:19:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:59.333 16:19:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:59.333 16:19:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:59.591 16:19:33 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:59.591 16:19:33 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:59.591 16:19:33 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:59.591 16:19:33 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:59.591 16:19:33 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:59.591 16:19:33 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:59.591 16:19:33 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:59.591 16:19:33 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:59.591 16:19:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:59.591 16:19:33 -- common/autotest_common.sh@10 -- # set +x 00:07:59.591 16:19:33 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:59.591 16:19:33 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:59.591 16:19:33 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:59.591 16:19:33 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:59.591 16:19:33 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:59.591 16:19:33 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:59.591 16:19:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.591 16:19:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:59.591 16:19:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.591 16:19:33 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:07:59.591 16:19:33 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:07:59.591 16:19:33 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:07:59.591 16:19:33 -- nvmf/common.sh@415 -- # [[ 
virt == phy-fallback ]] 00:07:59.591 16:19:33 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:07:59.591 16:19:33 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:07:59.591 16:19:33 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:59.591 16:19:33 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:59.591 16:19:33 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:59.591 16:19:33 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:59.591 16:19:33 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:59.591 16:19:33 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:59.591 16:19:33 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:59.591 16:19:33 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:59.591 16:19:33 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:59.591 16:19:33 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:59.591 16:19:33 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:59.591 16:19:33 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:59.591 16:19:33 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:59.591 Cannot find device "nvmf_init_br" 00:07:59.591 16:19:33 -- nvmf/common.sh@154 -- # true 00:07:59.591 16:19:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:59.591 Cannot find device "nvmf_tgt_br" 00:07:59.591 16:19:33 -- nvmf/common.sh@155 -- # true 00:07:59.591 16:19:33 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:59.591 Cannot find device "nvmf_tgt_br2" 00:07:59.591 16:19:33 -- nvmf/common.sh@156 -- # true 00:07:59.591 16:19:33 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:59.591 Cannot find device "nvmf_init_br" 00:07:59.591 16:19:33 -- nvmf/common.sh@157 -- # true 00:07:59.591 16:19:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:59.591 Cannot find device "nvmf_tgt_br" 00:07:59.591 16:19:33 -- nvmf/common.sh@158 -- # true 00:07:59.591 16:19:33 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:59.591 Cannot find device "nvmf_tgt_br2" 00:07:59.591 16:19:33 -- nvmf/common.sh@159 -- # true 00:07:59.591 16:19:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:59.591 Cannot find device "nvmf_br" 00:07:59.591 16:19:33 -- nvmf/common.sh@160 -- # true 00:07:59.591 16:19:33 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:59.591 Cannot find device "nvmf_init_if" 00:07:59.591 16:19:33 -- nvmf/common.sh@161 -- # true 00:07:59.591 16:19:33 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:59.591 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:59.591 16:19:33 -- nvmf/common.sh@162 -- # true 00:07:59.591 16:19:33 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:59.591 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:59.591 16:19:33 -- nvmf/common.sh@163 -- # true 00:07:59.591 16:19:33 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:59.591 16:19:33 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:59.592 16:19:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:59.592 16:19:33 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:59.592 16:19:33 -- nvmf/common.sh@174 -- # ip 
link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:59.592 16:19:33 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:59.592 16:19:33 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:59.592 16:19:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:59.592 16:19:33 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:59.592 16:19:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:59.592 16:19:33 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:59.592 16:19:33 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:59.592 16:19:33 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:59.592 16:19:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:59.592 16:19:33 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:59.592 16:19:33 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:59.592 16:19:33 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:59.850 16:19:33 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:59.850 16:19:33 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:59.850 16:19:33 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:59.850 16:19:33 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:59.850 16:19:33 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:59.850 16:19:33 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:59.850 16:19:33 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:59.850 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:59.850 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:07:59.850 00:07:59.850 --- 10.0.0.2 ping statistics --- 00:07:59.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.850 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:07:59.850 16:19:33 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:59.850 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:59.850 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:07:59.850 00:07:59.850 --- 10.0.0.3 ping statistics --- 00:07:59.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.850 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:07:59.850 16:19:33 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:59.850 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:59.850 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:07:59.850 00:07:59.850 --- 10.0.0.1 ping statistics --- 00:07:59.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.850 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:07:59.850 16:19:33 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:59.850 16:19:33 -- nvmf/common.sh@422 -- # return 0 00:07:59.850 16:19:33 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:59.850 16:19:33 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:59.850 16:19:33 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:59.850 16:19:33 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:59.850 16:19:33 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:59.850 16:19:33 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:59.850 16:19:33 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:59.850 16:19:33 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:59.850 16:19:33 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:59.850 16:19:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:59.850 16:19:33 -- common/autotest_common.sh@10 -- # set +x 00:07:59.850 16:19:33 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:59.850 16:19:33 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:59.850 16:19:33 -- target/nvmf_example.sh@34 -- # nvmfpid=64793 00:07:59.850 16:19:33 -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:59.850 16:19:33 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:59.850 16:19:33 -- target/nvmf_example.sh@36 -- # waitforlisten 64793 00:07:59.850 16:19:33 -- common/autotest_common.sh@817 -- # '[' -z 64793 ']' 00:07:59.850 16:19:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.850 16:19:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:59.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.850 16:19:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
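
All of the plumbing just above is nvmf_veth_init building a bridged veth topology: the target addresses (10.0.0.2 and 10.0.0.3) live inside the nvmf_tgt_ns_spdk namespace, the initiator address (10.0.0.1) stays in the root namespace, and the earlier "Cannot find device" / "Cannot open network namespace" lines are simply its idempotent teardown pass finding nothing to delete. A condensed recap of the setup commands from the log (the second target interface, nvmf_tgt_if2 / 10.0.0.3, follows the same pattern):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The three pings then prove both directions of the path before the target process is started.
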
00:07:59.850 16:19:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:59.850 16:19:33 -- common/autotest_common.sh@10 -- # set +x 00:08:01.219 16:19:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:01.219 16:19:34 -- common/autotest_common.sh@850 -- # return 0 00:08:01.219 16:19:34 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:01.219 16:19:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:01.219 16:19:34 -- common/autotest_common.sh@10 -- # set +x 00:08:01.219 16:19:34 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:01.219 16:19:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:01.219 16:19:34 -- common/autotest_common.sh@10 -- # set +x 00:08:01.219 16:19:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:01.219 16:19:34 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:01.219 16:19:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:01.219 16:19:34 -- common/autotest_common.sh@10 -- # set +x 00:08:01.219 16:19:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:01.219 16:19:34 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:01.219 16:19:34 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:01.219 16:19:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:01.219 16:19:34 -- common/autotest_common.sh@10 -- # set +x 00:08:01.219 16:19:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:01.219 16:19:35 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:01.219 16:19:35 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:01.219 16:19:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:01.219 16:19:35 -- common/autotest_common.sh@10 -- # set +x 00:08:01.219 16:19:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:01.219 16:19:35 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:01.219 16:19:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:01.219 16:19:35 -- common/autotest_common.sh@10 -- # set +x 00:08:01.219 16:19:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:01.219 16:19:35 -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:08:01.219 16:19:35 -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:13.419 Initializing NVMe Controllers 00:08:13.419 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:13.419 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:13.419 Initialization complete. Launching workers. 
00:08:13.419 ======================================================== 00:08:13.419 Latency(us) 00:08:13.419 Device Information : IOPS MiB/s Average min max 00:08:13.419 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14701.76 57.43 4353.13 806.41 22584.78 00:08:13.419 ======================================================== 00:08:13.419 Total : 14701.76 57.43 4353.13 806.41 22584.78 00:08:13.419 00:08:13.419 16:19:45 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:13.419 16:19:45 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:13.419 16:19:45 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:13.419 16:19:45 -- nvmf/common.sh@117 -- # sync 00:08:13.419 16:19:45 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:13.419 16:19:45 -- nvmf/common.sh@120 -- # set +e 00:08:13.419 16:19:45 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:13.419 16:19:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:13.419 rmmod nvme_tcp 00:08:13.419 rmmod nvme_fabrics 00:08:13.419 rmmod nvme_keyring 00:08:13.419 16:19:45 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:13.419 16:19:45 -- nvmf/common.sh@124 -- # set -e 00:08:13.419 16:19:45 -- nvmf/common.sh@125 -- # return 0 00:08:13.419 16:19:45 -- nvmf/common.sh@478 -- # '[' -n 64793 ']' 00:08:13.419 16:19:45 -- nvmf/common.sh@479 -- # killprocess 64793 00:08:13.419 16:19:45 -- common/autotest_common.sh@936 -- # '[' -z 64793 ']' 00:08:13.419 16:19:45 -- common/autotest_common.sh@940 -- # kill -0 64793 00:08:13.419 16:19:45 -- common/autotest_common.sh@941 -- # uname 00:08:13.419 16:19:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:13.419 16:19:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64793 00:08:13.419 16:19:45 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:08:13.419 16:19:45 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:08:13.419 killing process with pid 64793 00:08:13.419 16:19:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64793' 00:08:13.419 16:19:45 -- common/autotest_common.sh@955 -- # kill 64793 00:08:13.419 16:19:45 -- common/autotest_common.sh@960 -- # wait 64793 00:08:13.419 nvmf threads initialize successfully 00:08:13.419 bdev subsystem init successfully 00:08:13.419 created a nvmf target service 00:08:13.419 create targets's poll groups done 00:08:13.419 all subsystems of target started 00:08:13.419 nvmf target is running 00:08:13.419 all subsystems of target stopped 00:08:13.419 destroy targets's poll groups done 00:08:13.419 destroyed the nvmf target service 00:08:13.419 bdev subsystem finish successfully 00:08:13.419 nvmf threads destroy successfully 00:08:13.419 16:19:45 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:13.419 16:19:45 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:13.419 16:19:45 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:13.419 16:19:45 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:13.419 16:19:45 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:13.419 16:19:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:13.419 16:19:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:13.419 16:19:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.419 16:19:45 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:13.419 16:19:45 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:13.419 16:19:45 -- common/autotest_common.sh@716 -- # 
xtrace_disable 00:08:13.419 16:19:45 -- common/autotest_common.sh@10 -- # set +x 00:08:13.419 00:08:13.419 real 0m12.433s 00:08:13.419 user 0m44.434s 00:08:13.419 sys 0m2.015s 00:08:13.419 16:19:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:13.419 16:19:45 -- common/autotest_common.sh@10 -- # set +x 00:08:13.419 ************************************ 00:08:13.419 END TEST nvmf_example 00:08:13.419 ************************************ 00:08:13.419 16:19:45 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:13.419 16:19:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:13.419 16:19:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:13.419 16:19:45 -- common/autotest_common.sh@10 -- # set +x 00:08:13.419 ************************************ 00:08:13.419 START TEST nvmf_filesystem 00:08:13.419 ************************************ 00:08:13.419 16:19:45 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:13.419 * Looking for test storage... 00:08:13.419 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:13.419 16:19:45 -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:08:13.419 16:19:45 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:13.419 16:19:45 -- common/autotest_common.sh@34 -- # set -e 00:08:13.419 16:19:45 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:13.419 16:19:45 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:13.419 16:19:45 -- common/autotest_common.sh@38 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:08:13.419 16:19:45 -- common/autotest_common.sh@43 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:08:13.419 16:19:45 -- common/autotest_common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:08:13.419 16:19:45 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:13.419 16:19:45 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:13.419 16:19:45 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:13.419 16:19:45 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:13.419 16:19:45 -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:08:13.419 16:19:45 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:13.419 16:19:45 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:13.419 16:19:45 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:13.419 16:19:45 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:13.419 16:19:45 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:13.419 16:19:45 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:13.419 16:19:45 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:13.419 16:19:45 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:13.419 16:19:45 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:13.419 16:19:45 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:13.419 16:19:45 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:13.420 16:19:45 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:13.420 16:19:45 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:13.420 16:19:45 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:13.420 16:19:45 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:13.420 16:19:45 -- 
common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:13.420 16:19:45 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:13.420 16:19:45 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:13.420 16:19:45 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:13.420 16:19:45 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:13.420 16:19:45 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:13.420 16:19:45 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:13.420 16:19:45 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:13.420 16:19:45 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:13.420 16:19:45 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:13.420 16:19:45 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:13.420 16:19:45 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:13.420 16:19:45 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:13.420 16:19:45 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:13.420 16:19:45 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:13.420 16:19:45 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:08:13.420 16:19:45 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:13.420 16:19:45 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:13.420 16:19:45 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:13.420 16:19:45 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:13.420 16:19:45 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:08:13.420 16:19:45 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:13.420 16:19:45 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:13.420 16:19:45 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:13.420 16:19:45 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:13.420 16:19:45 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:08:13.420 16:19:45 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:08:13.420 16:19:45 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:13.420 16:19:45 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:08:13.420 16:19:45 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:08:13.420 16:19:45 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:08:13.420 16:19:45 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:08:13.420 16:19:45 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:08:13.420 16:19:45 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:08:13.420 16:19:45 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:08:13.420 16:19:45 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:08:13.420 16:19:45 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:08:13.420 16:19:45 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:08:13.420 16:19:45 -- common/build_config.sh@59 -- # CONFIG_GOLANG=y 00:08:13.420 16:19:45 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:08:13.420 16:19:45 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:08:13.420 16:19:45 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:08:13.420 16:19:45 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:08:13.420 16:19:45 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:08:13.420 16:19:45 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:08:13.420 16:19:45 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:08:13.420 16:19:45 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:08:13.420 
16:19:45 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:13.420 16:19:45 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:08:13.420 16:19:45 -- common/build_config.sh@70 -- # CONFIG_AVAHI=y 00:08:13.420 16:19:45 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:08:13.420 16:19:45 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:08:13.420 16:19:45 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:08:13.420 16:19:45 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:08:13.420 16:19:45 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:08:13.420 16:19:45 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:08:13.420 16:19:45 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:08:13.420 16:19:45 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:08:13.420 16:19:45 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:08:13.420 16:19:45 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:13.420 16:19:45 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:08:13.420 16:19:45 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:08:13.420 16:19:45 -- common/autotest_common.sh@53 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:08:13.420 16:19:45 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:08:13.420 16:19:45 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:08:13.420 16:19:45 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:08:13.420 16:19:45 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:08:13.420 16:19:45 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:08:13.420 16:19:45 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:08:13.420 16:19:45 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:08:13.420 16:19:45 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:13.420 16:19:45 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:13.420 16:19:45 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:13.420 16:19:45 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:13.420 16:19:45 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:13.420 16:19:45 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:13.420 16:19:45 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:08:13.420 16:19:45 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:13.420 #define SPDK_CONFIG_H 00:08:13.420 #define SPDK_CONFIG_APPS 1 00:08:13.420 #define SPDK_CONFIG_ARCH native 00:08:13.420 #undef SPDK_CONFIG_ASAN 00:08:13.420 #define SPDK_CONFIG_AVAHI 1 00:08:13.420 #undef SPDK_CONFIG_CET 00:08:13.420 #define SPDK_CONFIG_COVERAGE 1 00:08:13.420 #define SPDK_CONFIG_CROSS_PREFIX 00:08:13.420 #undef SPDK_CONFIG_CRYPTO 00:08:13.420 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:13.420 #undef SPDK_CONFIG_CUSTOMOCF 00:08:13.420 #undef SPDK_CONFIG_DAOS 00:08:13.420 #define SPDK_CONFIG_DAOS_DIR 00:08:13.420 #define SPDK_CONFIG_DEBUG 1 00:08:13.420 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:13.420 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:08:13.420 #define SPDK_CONFIG_DPDK_INC_DIR 00:08:13.420 #define SPDK_CONFIG_DPDK_LIB_DIR 00:08:13.420 #undef 
SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:13.420 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:13.420 #define SPDK_CONFIG_EXAMPLES 1 00:08:13.420 #undef SPDK_CONFIG_FC 00:08:13.420 #define SPDK_CONFIG_FC_PATH 00:08:13.420 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:13.420 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:13.420 #undef SPDK_CONFIG_FUSE 00:08:13.420 #undef SPDK_CONFIG_FUZZER 00:08:13.420 #define SPDK_CONFIG_FUZZER_LIB 00:08:13.420 #define SPDK_CONFIG_GOLANG 1 00:08:13.420 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:13.420 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:13.420 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:13.420 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:08:13.420 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:13.420 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:13.420 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:13.420 #define SPDK_CONFIG_IDXD 1 00:08:13.420 #undef SPDK_CONFIG_IDXD_KERNEL 00:08:13.420 #undef SPDK_CONFIG_IPSEC_MB 00:08:13.420 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:13.420 #define SPDK_CONFIG_ISAL 1 00:08:13.420 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:13.420 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:13.420 #define SPDK_CONFIG_LIBDIR 00:08:13.420 #undef SPDK_CONFIG_LTO 00:08:13.420 #define SPDK_CONFIG_MAX_LCORES 00:08:13.420 #define SPDK_CONFIG_NVME_CUSE 1 00:08:13.420 #undef SPDK_CONFIG_OCF 00:08:13.420 #define SPDK_CONFIG_OCF_PATH 00:08:13.420 #define SPDK_CONFIG_OPENSSL_PATH 00:08:13.420 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:13.420 #define SPDK_CONFIG_PGO_DIR 00:08:13.420 #undef SPDK_CONFIG_PGO_USE 00:08:13.420 #define SPDK_CONFIG_PREFIX /usr/local 00:08:13.420 #undef SPDK_CONFIG_RAID5F 00:08:13.420 #undef SPDK_CONFIG_RBD 00:08:13.420 #define SPDK_CONFIG_RDMA 1 00:08:13.420 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:13.420 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:13.420 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:13.420 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:13.420 #define SPDK_CONFIG_SHARED 1 00:08:13.420 #undef SPDK_CONFIG_SMA 00:08:13.420 #define SPDK_CONFIG_TESTS 1 00:08:13.420 #undef SPDK_CONFIG_TSAN 00:08:13.420 #define SPDK_CONFIG_UBLK 1 00:08:13.420 #define SPDK_CONFIG_UBSAN 1 00:08:13.420 #undef SPDK_CONFIG_UNIT_TESTS 00:08:13.420 #undef SPDK_CONFIG_URING 00:08:13.420 #define SPDK_CONFIG_URING_PATH 00:08:13.420 #undef SPDK_CONFIG_URING_ZNS 00:08:13.420 #define SPDK_CONFIG_USDT 1 00:08:13.420 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:13.420 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:13.420 #undef SPDK_CONFIG_VFIO_USER 00:08:13.420 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:13.420 #define SPDK_CONFIG_VHOST 1 00:08:13.420 #define SPDK_CONFIG_VIRTIO 1 00:08:13.420 #undef SPDK_CONFIG_VTUNE 00:08:13.420 #define SPDK_CONFIG_VTUNE_DIR 00:08:13.420 #define SPDK_CONFIG_WERROR 1 00:08:13.420 #define SPDK_CONFIG_WPDK_DIR 00:08:13.420 #undef SPDK_CONFIG_XNVME 00:08:13.420 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:13.420 16:19:45 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:13.420 16:19:45 -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:13.420 16:19:45 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:13.420 16:19:45 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:13.420 16:19:45 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:13.421 16:19:45 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.421 16:19:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.421 16:19:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.421 16:19:45 -- paths/export.sh@5 -- # export PATH 00:08:13.421 16:19:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.421 16:19:45 -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:08:13.421 16:19:45 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:08:13.421 16:19:45 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:08:13.421 16:19:45 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:08:13.421 16:19:45 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:08:13.421 16:19:45 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:08:13.421 16:19:45 -- pm/common@67 -- # TEST_TAG=N/A 00:08:13.421 16:19:45 -- pm/common@68 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:08:13.421 16:19:45 -- pm/common@70 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:08:13.421 16:19:45 -- pm/common@71 -- # uname -s 00:08:13.421 16:19:45 -- pm/common@71 -- # PM_OS=Linux 00:08:13.421 16:19:45 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:13.421 16:19:45 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:08:13.421 16:19:45 -- pm/common@76 -- # [[ Linux == Linux ]] 00:08:13.421 16:19:45 -- pm/common@76 -- # [[ 
QEMU != QEMU ]] 00:08:13.421 16:19:45 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:08:13.421 16:19:45 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:08:13.421 16:19:45 -- pm/common@85 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:08:13.421 16:19:45 -- common/autotest_common.sh@57 -- # : 0 00:08:13.421 16:19:45 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:08:13.421 16:19:45 -- common/autotest_common.sh@61 -- # : 0 00:08:13.421 16:19:45 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:13.421 16:19:45 -- common/autotest_common.sh@63 -- # : 0 00:08:13.421 16:19:45 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:08:13.421 16:19:45 -- common/autotest_common.sh@65 -- # : 1 00:08:13.421 16:19:45 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:13.421 16:19:45 -- common/autotest_common.sh@67 -- # : 0 00:08:13.421 16:19:45 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:08:13.421 16:19:45 -- common/autotest_common.sh@69 -- # : 00:08:13.421 16:19:45 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:08:13.421 16:19:45 -- common/autotest_common.sh@71 -- # : 0 00:08:13.421 16:19:45 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:08:13.421 16:19:45 -- common/autotest_common.sh@73 -- # : 0 00:08:13.421 16:19:45 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:08:13.421 16:19:45 -- common/autotest_common.sh@75 -- # : 0 00:08:13.421 16:19:45 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:08:13.421 16:19:45 -- common/autotest_common.sh@77 -- # : 0 00:08:13.421 16:19:45 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:13.421 16:19:45 -- common/autotest_common.sh@79 -- # : 0 00:08:13.421 16:19:45 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:08:13.421 16:19:45 -- common/autotest_common.sh@81 -- # : 0 00:08:13.421 16:19:45 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:08:13.421 16:19:45 -- common/autotest_common.sh@83 -- # : 0 00:08:13.421 16:19:45 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:08:13.421 16:19:45 -- common/autotest_common.sh@85 -- # : 0 00:08:13.421 16:19:45 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:08:13.421 16:19:45 -- common/autotest_common.sh@87 -- # : 0 00:08:13.421 16:19:45 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:08:13.421 16:19:45 -- common/autotest_common.sh@89 -- # : 0 00:08:13.421 16:19:45 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:08:13.421 16:19:45 -- common/autotest_common.sh@91 -- # : 1 00:08:13.421 16:19:45 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:08:13.421 16:19:45 -- common/autotest_common.sh@93 -- # : 0 00:08:13.421 16:19:45 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:08:13.421 16:19:45 -- common/autotest_common.sh@95 -- # : 0 00:08:13.421 16:19:45 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:13.421 16:19:45 -- common/autotest_common.sh@97 -- # : 0 00:08:13.421 16:19:45 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:08:13.421 16:19:45 -- common/autotest_common.sh@99 -- # : 0 00:08:13.421 16:19:45 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:08:13.421 16:19:45 -- common/autotest_common.sh@101 -- # : tcp 00:08:13.421 16:19:45 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:13.421 16:19:45 
-- common/autotest_common.sh@103 -- # : 0 00:08:13.421 16:19:45 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:08:13.421 16:19:45 -- common/autotest_common.sh@105 -- # : 0 00:08:13.421 16:19:45 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:08:13.421 16:19:45 -- common/autotest_common.sh@107 -- # : 0 00:08:13.421 16:19:45 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:08:13.421 16:19:45 -- common/autotest_common.sh@109 -- # : 0 00:08:13.421 16:19:45 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:08:13.421 16:19:45 -- common/autotest_common.sh@111 -- # : 0 00:08:13.421 16:19:45 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:08:13.421 16:19:45 -- common/autotest_common.sh@113 -- # : 0 00:08:13.421 16:19:45 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:08:13.421 16:19:45 -- common/autotest_common.sh@115 -- # : 0 00:08:13.421 16:19:45 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:08:13.421 16:19:45 -- common/autotest_common.sh@117 -- # : 0 00:08:13.421 16:19:45 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:13.421 16:19:45 -- common/autotest_common.sh@119 -- # : 0 00:08:13.421 16:19:45 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:08:13.421 16:19:45 -- common/autotest_common.sh@121 -- # : 1 00:08:13.421 16:19:45 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:08:13.421 16:19:45 -- common/autotest_common.sh@123 -- # : 00:08:13.421 16:19:45 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:13.421 16:19:45 -- common/autotest_common.sh@125 -- # : 0 00:08:13.421 16:19:45 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:08:13.421 16:19:45 -- common/autotest_common.sh@127 -- # : 0 00:08:13.421 16:19:45 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:08:13.421 16:19:45 -- common/autotest_common.sh@129 -- # : 0 00:08:13.421 16:19:45 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:08:13.421 16:19:45 -- common/autotest_common.sh@131 -- # : 0 00:08:13.421 16:19:45 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:08:13.421 16:19:45 -- common/autotest_common.sh@133 -- # : 0 00:08:13.421 16:19:45 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:08:13.421 16:19:45 -- common/autotest_common.sh@135 -- # : 0 00:08:13.421 16:19:45 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:08:13.421 16:19:45 -- common/autotest_common.sh@137 -- # : 00:08:13.421 16:19:45 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:08:13.421 16:19:45 -- common/autotest_common.sh@139 -- # : true 00:08:13.421 16:19:45 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:08:13.421 16:19:45 -- common/autotest_common.sh@141 -- # : 0 00:08:13.421 16:19:45 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:08:13.421 16:19:45 -- common/autotest_common.sh@143 -- # : 0 00:08:13.421 16:19:45 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:08:13.421 16:19:45 -- common/autotest_common.sh@145 -- # : 1 00:08:13.421 16:19:45 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:08:13.421 16:19:45 -- common/autotest_common.sh@147 -- # : 0 00:08:13.421 16:19:45 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:08:13.421 16:19:45 -- common/autotest_common.sh@149 -- # : 0 00:08:13.421 16:19:45 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:08:13.421 
16:19:45 -- common/autotest_common.sh@151 -- # : 0 00:08:13.421 16:19:45 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:08:13.421 16:19:45 -- common/autotest_common.sh@153 -- # : 00:08:13.421 16:19:45 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:08:13.421 16:19:45 -- common/autotest_common.sh@155 -- # : 0 00:08:13.421 16:19:45 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:08:13.421 16:19:45 -- common/autotest_common.sh@157 -- # : 0 00:08:13.421 16:19:45 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:08:13.421 16:19:45 -- common/autotest_common.sh@159 -- # : 0 00:08:13.421 16:19:45 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:08:13.421 16:19:45 -- common/autotest_common.sh@161 -- # : 0 00:08:13.421 16:19:45 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:08:13.421 16:19:45 -- common/autotest_common.sh@163 -- # : 0 00:08:13.421 16:19:45 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:08:13.421 16:19:45 -- common/autotest_common.sh@166 -- # : 00:08:13.421 16:19:45 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:08:13.421 16:19:45 -- common/autotest_common.sh@168 -- # : 1 00:08:13.421 16:19:45 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:08:13.421 16:19:45 -- common/autotest_common.sh@170 -- # : 1 00:08:13.421 16:19:45 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:13.421 16:19:45 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:08:13.421 16:19:45 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:08:13.421 16:19:45 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:08:13.422 16:19:45 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:08:13.422 16:19:45 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:13.422 16:19:45 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:13.422 16:19:45 -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:13.422 16:19:45 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 
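The exports above come from autotest_common.sh wiring the freshly built libraries into the run: each time the script is sourced it appends the SPDK, DPDK and libvfio-user build directories to LD_LIBRARY_PATH, which is why the traced value starts with ':' and repeats the same triple several times. A minimal sketch of that pattern, assuming the repo root seen in the trace (a reconstruction for illustration, not the verbatim script):

rootdir=/home/vagrant/spdk_repo/spdk                            # repo root, as seen in the trace
export SPDK_LIB_DIR=$rootdir/build/lib                          # freshly built SPDK shared libraries
export DPDK_LIB_DIR=$rootdir/dpdk/build/lib                     # bundled DPDK build
export VFIO_LIB_DIR=$rootdir/build/libvfio-user/usr/local/lib   # libvfio-user install tree
# Appending on every (re-)source is what produces the leading ':' and the
# repeated entries visible in the traced LD_LIBRARY_PATH value:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$SPDK_LIB_DIR:$DPDK_LIB_DIR:$VFIO_LIB_DIR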
00:08:13.422 16:19:45 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:13.422 16:19:45 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:13.422 16:19:45 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:13.422 16:19:45 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:13.422 16:19:45 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:13.422 16:19:45 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:08:13.422 16:19:45 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:13.422 16:19:45 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:13.422 16:19:45 -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:13.422 16:19:45 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:13.422 16:19:45 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:13.422 16:19:45 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:08:13.422 16:19:45 -- common/autotest_common.sh@199 -- # cat 00:08:13.422 16:19:45 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:08:13.422 16:19:45 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:13.422 16:19:45 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:13.422 16:19:45 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:13.422 16:19:45 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:13.422 16:19:45 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:08:13.422 16:19:45 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:08:13.422 16:19:45 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:08:13.422 16:19:45 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:08:13.422 16:19:45 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:08:13.422 16:19:45 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:08:13.422 16:19:45 -- common/autotest_common.sh@242 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:13.422 16:19:45 -- common/autotest_common.sh@242 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:13.422 16:19:45 -- common/autotest_common.sh@243 -- # export 
VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:13.422 16:19:45 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:13.422 16:19:45 -- common/autotest_common.sh@245 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:08:13.422 16:19:45 -- common/autotest_common.sh@245 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:08:13.422 16:19:45 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:13.422 16:19:45 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:13.422 16:19:45 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:08:13.422 16:19:45 -- common/autotest_common.sh@252 -- # export valgrind= 00:08:13.422 16:19:45 -- common/autotest_common.sh@252 -- # valgrind= 00:08:13.422 16:19:45 -- common/autotest_common.sh@258 -- # uname -s 00:08:13.422 16:19:45 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:08:13.422 16:19:45 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:08:13.422 16:19:45 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:08:13.422 16:19:45 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:08:13.422 16:19:45 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:08:13.422 16:19:45 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:08:13.422 16:19:45 -- common/autotest_common.sh@268 -- # MAKE=make 00:08:13.422 16:19:45 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j10 00:08:13.422 16:19:45 -- common/autotest_common.sh@285 -- # export HUGEMEM=4096 00:08:13.422 16:19:45 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:08:13.422 16:19:45 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:08:13.422 16:19:45 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:08:13.422 16:19:45 -- common/autotest_common.sh@289 -- # for i in "$@" 00:08:13.422 16:19:45 -- common/autotest_common.sh@290 -- # case "$i" in 00:08:13.422 16:19:45 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=tcp 00:08:13.422 16:19:45 -- common/autotest_common.sh@307 -- # [[ -z 65041 ]] 00:08:13.422 16:19:45 -- common/autotest_common.sh@307 -- # kill -0 65041 00:08:13.422 16:19:45 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:08:13.422 16:19:45 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:08:13.422 16:19:45 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:08:13.422 16:19:45 -- common/autotest_common.sh@320 -- # local mount target_dir 00:08:13.422 16:19:45 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:08:13.422 16:19:45 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:08:13.422 16:19:45 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:08:13.422 16:19:45 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:08:13.422 16:19:45 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.MOtuLS 00:08:13.422 16:19:45 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:13.422 16:19:45 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:08:13.422 16:19:45 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:08:13.422 16:19:45 -- common/autotest_common.sh@344 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.MOtuLS/tests/target /tmp/spdk.MOtuLS 00:08:13.422 16:19:46 -- common/autotest_common.sh@347 -- # 
requested_size=2214592512 00:08:13.422 16:19:46 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:13.422 16:19:46 -- common/autotest_common.sh@316 -- # df -T 00:08:13.422 16:19:46 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:08:13.422 16:19:46 -- common/autotest_common.sh@350 -- # mounts["$mount"]=devtmpfs 00:08:13.422 16:19:46 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:08:13.422 16:19:46 -- common/autotest_common.sh@351 -- # avails["$mount"]=4194304 00:08:13.422 16:19:46 -- common/autotest_common.sh@351 -- # sizes["$mount"]=4194304 00:08:13.422 16:19:46 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:08:13.422 16:19:46 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:13.422 16:19:46 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:08:13.422 16:19:46 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:08:13.422 16:19:46 -- common/autotest_common.sh@351 -- # avails["$mount"]=6266613760 00:08:13.422 16:19:46 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6267891712 00:08:13.422 16:19:46 -- common/autotest_common.sh@352 -- # uses["$mount"]=1277952 00:08:13.422 16:19:46 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:13.422 16:19:46 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:08:13.422 16:19:46 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:08:13.422 16:19:46 -- common/autotest_common.sh@351 -- # avails["$mount"]=2494353408 00:08:13.422 16:19:46 -- common/autotest_common.sh@351 -- # sizes["$mount"]=2507157504 00:08:13.422 16:19:46 -- common/autotest_common.sh@352 -- # uses["$mount"]=12804096 00:08:13.422 16:19:46 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:13.422 16:19:46 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda5 00:08:13.422 16:19:46 -- common/autotest_common.sh@350 -- # fss["$mount"]=btrfs 00:08:13.422 16:19:46 -- common/autotest_common.sh@351 -- # avails["$mount"]=13812449280 00:08:13.422 16:19:46 -- common/autotest_common.sh@351 -- # sizes["$mount"]=20314062848 00:08:13.422 16:19:46 -- common/autotest_common.sh@352 -- # uses["$mount"]=5211684864 00:08:13.422 16:19:46 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:13.422 16:19:46 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda5 00:08:13.422 16:19:46 -- common/autotest_common.sh@350 -- # fss["$mount"]=btrfs 00:08:13.422 16:19:46 -- common/autotest_common.sh@351 -- # avails["$mount"]=13812449280 00:08:13.422 16:19:46 -- common/autotest_common.sh@351 -- # sizes["$mount"]=20314062848 00:08:13.422 16:19:46 -- common/autotest_common.sh@352 -- # uses["$mount"]=5211684864 00:08:13.422 16:19:46 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:13.422 16:19:46 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda2 00:08:13.422 16:19:46 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext4 00:08:13.422 16:19:46 -- common/autotest_common.sh@351 -- # avails["$mount"]=843546624 00:08:13.422 16:19:46 -- common/autotest_common.sh@351 -- # sizes["$mount"]=1012768768 00:08:13.422 16:19:46 -- common/autotest_common.sh@352 -- # uses["$mount"]=100016128 00:08:13.422 16:19:46 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:13.422 16:19:46 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:08:13.422 16:19:46 -- 
common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:08:13.422 16:19:46 -- common/autotest_common.sh@351 -- # avails["$mount"]=6267756544 00:08:13.422 16:19:46 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6267891712 00:08:13.422 16:19:46 -- common/autotest_common.sh@352 -- # uses["$mount"]=135168 00:08:13.422 16:19:46 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:13.422 16:19:46 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda3 00:08:13.423 16:19:46 -- common/autotest_common.sh@350 -- # fss["$mount"]=vfat 00:08:13.423 16:19:46 -- common/autotest_common.sh@351 -- # avails["$mount"]=92499968 00:08:13.423 16:19:46 -- common/autotest_common.sh@351 -- # sizes["$mount"]=104607744 00:08:13.423 16:19:46 -- common/autotest_common.sh@352 -- # uses["$mount"]=12107776 00:08:13.423 16:19:46 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:13.423 16:19:46 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:08:13.423 16:19:46 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:08:13.423 16:19:46 -- common/autotest_common.sh@351 -- # avails["$mount"]=1253572608 00:08:13.423 16:19:46 -- common/autotest_common.sh@351 -- # sizes["$mount"]=1253576704 00:08:13.423 16:19:46 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:08:13.423 16:19:46 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:13.423 16:19:46 -- common/autotest_common.sh@350 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt/output 00:08:13.423 16:19:46 -- common/autotest_common.sh@350 -- # fss["$mount"]=fuse.sshfs 00:08:13.423 16:19:46 -- common/autotest_common.sh@351 -- # avails["$mount"]=92814184448 00:08:13.423 16:19:46 -- common/autotest_common.sh@351 -- # sizes["$mount"]=105088212992 00:08:13.423 16:19:46 -- common/autotest_common.sh@352 -- # uses["$mount"]=6888595456 00:08:13.423 16:19:46 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:13.423 16:19:46 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:08:13.423 * Looking for test storage... 
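The df block above is set_test_storage sizing up every mount point before picking one with enough room for the test data. A condensed sketch of the probe, assuming the field order matches the read in the trace (df -T prints Filesystem, Type, blocks, Used, Available, Use%, Mounted on):

probe_mounts() {
    # Associative arrays keyed by mount point, as in the traced run
    declare -gA mounts fss sizes avails uses
    local source fs size use avail _ mount
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source
        fss["$mount"]=$fs
        sizes["$mount"]=$size
        avails["$mount"]=$avail   # free space, used for the capacity check
        uses["$mount"]=$use
    done < <(df -T | grep -v Filesystem)   # skip the header row
}

Here the run asked for requested_size=2214592512 (the 2 GiB passed to set_test_storage plus a small reserve), and the btrfs volume on /home, with roughly 13.8 GB available, is selected a few lines further down.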
00:08:13.423 16:19:46 -- common/autotest_common.sh@357 -- # local target_space new_size 00:08:13.423 16:19:46 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:08:13.423 16:19:46 -- common/autotest_common.sh@361 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:13.423 16:19:46 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:13.423 16:19:46 -- common/autotest_common.sh@361 -- # mount=/home 00:08:13.423 16:19:46 -- common/autotest_common.sh@363 -- # target_space=13812449280 00:08:13.423 16:19:46 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:08:13.423 16:19:46 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:08:13.423 16:19:46 -- common/autotest_common.sh@369 -- # [[ btrfs == tmpfs ]] 00:08:13.423 16:19:46 -- common/autotest_common.sh@369 -- # [[ btrfs == ramfs ]] 00:08:13.423 16:19:46 -- common/autotest_common.sh@369 -- # [[ /home == / ]] 00:08:13.423 16:19:46 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:13.423 16:19:46 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:13.423 16:19:46 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:13.423 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:13.423 16:19:46 -- common/autotest_common.sh@378 -- # return 0 00:08:13.423 16:19:46 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:08:13.423 16:19:46 -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:08:13.423 16:19:46 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:13.423 16:19:46 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:13.423 16:19:46 -- common/autotest_common.sh@1673 -- # true 00:08:13.423 16:19:46 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:08:13.423 16:19:46 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:13.423 16:19:46 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:13.423 16:19:46 -- common/autotest_common.sh@27 -- # exec 00:08:13.423 16:19:46 -- common/autotest_common.sh@29 -- # exec 00:08:13.423 16:19:46 -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:13.423 16:19:46 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:08:13.423 16:19:46 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:13.423 16:19:46 -- common/autotest_common.sh@18 -- # set -x 00:08:13.423 16:19:46 -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:13.423 16:19:46 -- nvmf/common.sh@7 -- # uname -s 00:08:13.423 16:19:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:13.423 16:19:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:13.423 16:19:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:13.423 16:19:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:13.423 16:19:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:13.423 16:19:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:13.423 16:19:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:13.423 16:19:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:13.423 16:19:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:13.423 16:19:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:13.423 16:19:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d 00:08:13.423 16:19:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=35bbb10f-fc38-42ac-b909-033700c5e05d 00:08:13.423 16:19:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:13.423 16:19:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:13.423 16:19:46 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:13.423 16:19:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:13.423 16:19:46 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:13.423 16:19:46 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:13.423 16:19:46 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:13.423 16:19:46 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:13.423 16:19:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.423 16:19:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.423 16:19:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.423 16:19:46 -- paths/export.sh@5 -- # export PATH 00:08:13.423 16:19:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.423 16:19:46 -- nvmf/common.sh@47 -- # : 0 00:08:13.423 16:19:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:13.423 16:19:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:13.423 16:19:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:13.423 16:19:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:13.423 16:19:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:13.423 16:19:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:13.423 16:19:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:13.423 16:19:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:13.423 16:19:46 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:13.423 16:19:46 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:13.423 16:19:46 -- target/filesystem.sh@15 -- # nvmftestinit 00:08:13.423 16:19:46 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:13.423 16:19:46 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:13.423 16:19:46 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:13.423 16:19:46 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:13.423 16:19:46 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:13.423 16:19:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:13.423 16:19:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:13.423 16:19:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.423 16:19:46 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:08:13.423 16:19:46 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:08:13.423 16:19:46 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:08:13.423 16:19:46 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:08:13.423 16:19:46 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:08:13.423 16:19:46 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:08:13.423 16:19:46 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:13.423 16:19:46 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:13.423 16:19:46 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:13.423 16:19:46 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:13.423 16:19:46 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:13.423 16:19:46 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:13.423 16:19:46 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:13.423 16:19:46 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:13.423 16:19:46 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:13.423 16:19:46 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:13.423 16:19:46 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:13.423 16:19:46 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:13.423 16:19:46 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:13.423 16:19:46 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:13.424 Cannot find device "nvmf_tgt_br" 00:08:13.424 16:19:46 -- nvmf/common.sh@155 -- # true 00:08:13.424 16:19:46 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:13.424 Cannot find device "nvmf_tgt_br2" 00:08:13.424 16:19:46 -- nvmf/common.sh@156 -- # true 00:08:13.424 16:19:46 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:13.424 16:19:46 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:13.424 Cannot find device "nvmf_tgt_br" 00:08:13.424 16:19:46 -- nvmf/common.sh@158 -- # true 00:08:13.424 16:19:46 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:13.424 Cannot find device "nvmf_tgt_br2" 00:08:13.424 16:19:46 -- nvmf/common.sh@159 -- # true 00:08:13.424 16:19:46 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:13.424 16:19:46 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:13.424 16:19:46 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:13.424 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:13.424 16:19:46 -- nvmf/common.sh@162 -- # true 00:08:13.424 16:19:46 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:13.424 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:13.424 16:19:46 -- nvmf/common.sh@163 -- # true 00:08:13.424 16:19:46 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:13.424 16:19:46 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:13.424 16:19:46 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:13.424 16:19:46 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:13.424 16:19:46 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:13.424 16:19:46 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:13.424 16:19:46 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:13.424 16:19:46 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:13.424 16:19:46 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:13.424 16:19:46 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:13.424 16:19:46 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:13.424 16:19:46 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:13.424 16:19:46 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:13.424 16:19:46 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:13.424 16:19:46 
-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:08:13.424 16:19:46 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 
00:08:13.424 16:19:46 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 
00:08:13.424 16:19:46 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 
00:08:13.424 16:19:46 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 
00:08:13.424 16:19:46 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 
00:08:13.424 16:19:46 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 
00:08:13.424 16:19:46 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 
00:08:13.424 16:19:46 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 
00:08:13.424 16:19:46 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 
00:08:13.424 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:13.424 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 
00:08:13.424 
00:08:13.424 --- 10.0.0.2 ping statistics --- 
00:08:13.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 
00:08:13.424 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 
00:08:13.424 16:19:46 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 
00:08:13.424 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:08:13.424 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 
00:08:13.424 
00:08:13.424 --- 10.0.0.3 ping statistics --- 
00:08:13.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 
00:08:13.424 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 
00:08:13.424 16:19:46 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 
00:08:13.424 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:13.424 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 
00:08:13.424 
00:08:13.424 --- 10.0.0.1 ping statistics --- 
00:08:13.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 
00:08:13.424 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 
00:08:13.424 16:19:46 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 
00:08:13.424 16:19:46 -- nvmf/common.sh@422 -- # return 0 
00:08:13.424 16:19:46 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 
00:08:13.424 16:19:46 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 
00:08:13.424 16:19:46 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 
00:08:13.424 16:19:46 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 
00:08:13.424 16:19:46 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:08:13.424 16:19:46 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 
00:08:13.424 16:19:46 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 
00:08:13.424 16:19:46 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 
00:08:13.424 16:19:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 
00:08:13.424 16:19:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 
00:08:13.424 16:19:46 -- common/autotest_common.sh@10 -- # set +x 
00:08:13.424 ************************************ 
00:08:13.424 START TEST nvmf_filesystem_no_in_capsule 
00:08:13.424 ************************************ 
00:08:13.424 16:19:46 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 0 
00:08:13.424 16:19:46 -- target/filesystem.sh@47 -- # in_capsule=0 
00:08:13.424 16:19:46 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 
00:08:13.424 16:19:46 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 
00:08:13.424 16:19:46 -- common/autotest_common.sh@710 -- # xtrace_disable 
00:08:13.424 16:19:46 -- common/autotest_common.sh@10 -- # set +x 
00:08:13.424 16:19:46 -- nvmf/common.sh@470 -- # nvmfpid=65211 
00:08:13.424 16:19:46 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
00:08:13.424 16:19:46 -- nvmf/common.sh@471 -- # waitforlisten 65211 
00:08:13.424 16:19:46 -- common/autotest_common.sh@817 -- # '[' -z 65211 ']' 
00:08:13.424 16:19:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 
00:08:13.424 16:19:46 -- common/autotest_common.sh@822 -- # local max_retries=100 
00:08:13.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:13.424 16:19:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:13.424 16:19:46 -- common/autotest_common.sh@826 -- # xtrace_disable 
00:08:13.424 16:19:46 -- common/autotest_common.sh@10 -- # set +x 
00:08:13.424 [2024-04-17 16:19:46.521846] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 
00:08:13.424 [2024-04-17 16:19:46.521932] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 
00:08:13.424 [2024-04-17 16:19:46.658730] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 
00:08:13.424 [2024-04-17 16:19:46.781545] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:08:13.424 [2024-04-17 16:19:46.781606] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:13.424 [2024-04-17 16:19:46.781618] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 
00:08:13.424 [2024-04-17 16:19:46.781627] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:08:13.424 [2024-04-17 16:19:46.781634] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
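With the plumbing verified, nvmfappstart launches the target: the trace shows nvmf_tgt started inside the nvmf_tgt_ns_spdk namespace as pid 65211, then waitforlisten polls the app's RPC socket until it answers (the reactor messages that follow are the target coming up on all four cores). The shape of that launch as a hedged sketch, with paths and flags copied from the trace and the polling loop simplified relative to the real helper:

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Poll the UNIX-domain RPC socket until the target responds
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" || exit 1   # give up if the target process died
    sleep 0.5
done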
00:08:13.424 [2024-04-17 16:19:46.781805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 
00:08:13.424 [2024-04-17 16:19:46.781942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 
00:08:13.424 [2024-04-17 16:19:46.782423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 
00:08:13.424 [2024-04-17 16:19:46.782441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 
00:08:13.682 16:19:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 
00:08:13.682 16:19:47 -- common/autotest_common.sh@850 -- # return 0 
00:08:13.682 16:19:47 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 
00:08:13.682 16:19:47 -- common/autotest_common.sh@716 -- # xtrace_disable 
00:08:13.682 16:19:47 -- common/autotest_common.sh@10 -- # set +x 
00:08:13.682 16:19:47 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 
00:08:13.682 16:19:47 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 
00:08:13.682 16:19:47 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 
00:08:13.682 16:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:08:13.682 16:19:47 -- common/autotest_common.sh@10 -- # set +x 
00:08:13.682 [2024-04-17 16:19:47.509223] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:08:13.682 16:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:08:13.682 16:19:47 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 
00:08:13.682 16:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:08:13.682 16:19:47 -- common/autotest_common.sh@10 -- # set +x 
00:08:13.682 Malloc1 
00:08:13.682 16:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:08:13.682 16:19:47 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
00:08:13.682 16:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:08:13.682 16:19:47 -- common/autotest_common.sh@10 -- # set +x 
00:08:13.682 16:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:08:13.682 16:19:47 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 
00:08:13.682 16:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:08:13.682 16:19:47 -- common/autotest_common.sh@10 -- # set +x 
00:08:13.682 16:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:08:13.682 16:19:47 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:08:13.682 16:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:08:13.682 16:19:47 -- common/autotest_common.sh@10 -- # set +x 
00:08:13.682 [2024-04-17 16:19:47.706398] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:08:13.682 16:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:08:13.682 16:19:47 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 
00:08:13.682 16:19:47 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 
00:08:13.682 16:19:47 -- common/autotest_common.sh@1365 -- # local bdev_info 
00:08:13.682 16:19:47 -- common/autotest_common.sh@1366 -- # local bs 
00:08:13.682 16:19:47 -- common/autotest_common.sh@1367 -- # local nb 
00:08:13.682 16:19:47 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 
00:08:13.682 16:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:08:13.682 16:19:47 -- common/autotest_common.sh@10 -- # set +x 
00:08:13.940 16:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:08:13.940 16:19:47 -- common/autotest_common.sh@1368 -- # bdev_info='[ 
00:08:13.940 { 
00:08:13.940 "aliases": [ 
00:08:13.940 "09737306-9973-4820-b89b-e006f23e6860" 
00:08:13.940 ], 
00:08:13.940 "assigned_rate_limits": { 
00:08:13.940 "r_mbytes_per_sec": 0, 
00:08:13.940 "rw_ios_per_sec": 0, 
00:08:13.940 "rw_mbytes_per_sec": 0, 
00:08:13.940 "w_mbytes_per_sec": 0 
00:08:13.940 }, 
00:08:13.940 "block_size": 512, 
00:08:13.940 "claim_type": "exclusive_write", 
00:08:13.940 "claimed": true, 
00:08:13.940 "driver_specific": {}, 
00:08:13.940 "memory_domains": [ 
00:08:13.940 { 
00:08:13.940 "dma_device_id": "system", 
00:08:13.940 "dma_device_type": 1 
00:08:13.940 }, 
00:08:13.940 { 
00:08:13.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:08:13.940 "dma_device_type": 2 
00:08:13.940 } 
00:08:13.940 ], 
00:08:13.940 "name": "Malloc1", 
00:08:13.940 "num_blocks": 1048576, 
00:08:13.940 "product_name": "Malloc disk", 
00:08:13.940 "supported_io_types": { 
00:08:13.940 "abort": true, 
00:08:13.940 "compare": false, 
00:08:13.940 "compare_and_write": false, 
00:08:13.940 "flush": true, 
00:08:13.940 "nvme_admin": false, 
00:08:13.940 "nvme_io": false, 
00:08:13.940 "read": true, 
00:08:13.940 "reset": true, 
00:08:13.940 "unmap": true, 
00:08:13.940 "write": true, 
00:08:13.940 "write_zeroes": true 
00:08:13.940 }, 
00:08:13.940 "uuid": "09737306-9973-4820-b89b-e006f23e6860", 
00:08:13.940 "zoned": false 
00:08:13.940 } 
00:08:13.940 ]' 
00:08:13.940 16:19:47 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 
00:08:13.940 16:19:47 -- common/autotest_common.sh@1369 -- # bs=512 
00:08:13.940 16:19:47 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 
00:08:13.940 16:19:47 -- common/autotest_common.sh@1370 -- # nb=1048576 
00:08:13.940 16:19:47 -- common/autotest_common.sh@1373 -- # bdev_size=512 
00:08:13.940 16:19:47 -- common/autotest_common.sh@1374 -- # echo 512 
00:08:13.940 16:19:47 -- target/filesystem.sh@58 -- # malloc_size=536870912 
00:08:13.940 16:19:47 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d --hostid=35bbb10f-fc38-42ac-b909-033700c5e05d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:08:14.198 16:19:47 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 
00:08:14.198 16:19:47 -- common/autotest_common.sh@1184 -- # local i=0 
00:08:14.198 16:19:47 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 
00:08:14.198 16:19:47 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 
00:08:14.198 16:19:47 -- common/autotest_common.sh@1191 -- # sleep 2 
00:08:16.105 16:19:50 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 
00:08:16.105 16:19:50 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 
00:08:16.105 16:19:50 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 
00:08:16.105 16:19:50 -- common/autotest_common.sh@1193 -- # nvme_devices=1 
00:08:16.105 16:19:50 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 
00:08:16.105 16:19:50 -- common/autotest_common.sh@1194 -- # return 0 
00:08:16.105 16:19:50 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 
00:08:16.105 16:19:50 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:08:16.105 16:19:50 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 
00:08:16.105 16:19:50 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 
00:08:16.105 16:19:50 -- setup/common.sh@76 -- # local dev=nvme0n1 
00:08:16.105 16:19:50 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 
00:08:16.105 16:19:50 -- setup/common.sh@80 -- # echo 536870912 
00:08:16.105 16:19:50 -- target/filesystem.sh@64 -- # nvme_size=536870912 
00:08:16.105 16:19:50 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 
00:08:16.105 16:19:50 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 
00:08:16.105 16:19:50 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 
00:08:16.105 16:19:50 -- target/filesystem.sh@69 -- # partprobe 
00:08:16.365 16:19:50 -- target/filesystem.sh@70 -- # sleep 1 
00:08:17.300 16:19:51 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 
00:08:17.300 16:19:51 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 
00:08:17.300 16:19:51 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 
00:08:17.300 16:19:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 
00:08:17.300 16:19:51 -- common/autotest_common.sh@10 -- # set +x 
00:08:17.300 ************************************ 
00:08:17.300 START TEST filesystem_ext4 
00:08:17.300 ************************************ 
00:08:17.300 16:19:51 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 
00:08:17.300 16:19:51 -- target/filesystem.sh@18 -- # fstype=ext4 
00:08:17.300 16:19:51 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 
00:08:17.300 16:19:51 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 
00:08:17.300 16:19:51 -- common/autotest_common.sh@912 -- # local fstype=ext4 
00:08:17.300 16:19:51 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 
00:08:17.300 16:19:51 -- common/autotest_common.sh@914 -- # local i=0 
00:08:17.300 16:19:51 -- common/autotest_common.sh@915 -- # local force 
00:08:17.300 16:19:51 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 
00:08:17.300 16:19:51 -- common/autotest_common.sh@918 -- # force=-F 
00:08:17.300 16:19:51 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 
00:08:17.300 mke2fs 1.46.5 (30-Dec-2021) 
00:08:17.300 Discarding device blocks: 0/522240 done 
00:08:17.300 Creating filesystem with 522240 1k blocks and 130560 inodes 
00:08:17.300 Filesystem UUID: cec62af2-0f97-4d89-a741-e7c08b734710 
00:08:17.300 Superblock backups stored on blocks: 
00:08:17.300 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 
00:08:17.300 
00:08:17.300 Allocating group tables: 0/64 done 
00:08:17.300 Writing inode tables: 0/64 done 
00:08:17.558 Creating journal (8192 blocks): done 
00:08:17.558 Writing superblocks and filesystem accounting information: 0/64 done 
00:08:17.558 
00:08:17.558 16:19:51 -- common/autotest_common.sh@931 -- # return 0 
00:08:17.558 16:19:51 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 
00:08:17.558 16:19:51 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 
00:08:17.558 16:19:51 -- target/filesystem.sh@25 -- # sync 
00:08:17.558 16:19:51 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 
00:08:17.558 16:19:51 -- target/filesystem.sh@27 -- # sync 
00:08:17.558 16:19:51 -- target/filesystem.sh@29 -- # i=0 
00:08:17.558 16:19:51 -- target/filesystem.sh@30 -- # umount /mnt/device 
00:08:17.558 16:19:51 -- target/filesystem.sh@37 -- # kill -0 65211 
00:08:17.558 16:19:51 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 
00:08:17.558 16:19:51 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 
00:08:17.558 16:19:51 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:08:17.558 16:19:51 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 
00:08:17.558 
00:08:17.558 real 0m0.356s 
00:08:17.558 user 0m0.020s 
00:08:17.558 sys 0m0.051s 
00:08:17.558 16:19:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 
00:08:17.558 16:19:51 -- common/autotest_common.sh@10 -- # set +x 
00:08:17.558 ************************************ 
00:08:17.558 END TEST filesystem_ext4 
00:08:17.558 ************************************ 
00:08:17.816 16:19:51 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 
00:08:17.816 16:19:51 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 
00:08:17.816 16:19:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 
00:08:17.816 16:19:51 -- common/autotest_common.sh@10 -- # set +x 
00:08:17.816 ************************************ 
00:08:17.816 START TEST filesystem_btrfs 
00:08:17.816 ************************************ 
00:08:17.816 16:19:51 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 
00:08:17.816 16:19:51 -- target/filesystem.sh@18 -- # fstype=btrfs 
00:08:17.816 16:19:51 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 
00:08:17.816 16:19:51 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 
00:08:17.816 16:19:51 -- common/autotest_common.sh@912 -- # local fstype=btrfs 
00:08:17.816 16:19:51 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 
00:08:17.816 16:19:51 -- common/autotest_common.sh@914 -- # local i=0 
00:08:17.816 16:19:51 -- common/autotest_common.sh@915 -- # local force 
00:08:17.816 16:19:51 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 
00:08:17.816 16:19:51 -- common/autotest_common.sh@920 -- # force=-f 
00:08:17.816 16:19:51 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 
00:08:17.816 btrfs-progs v6.6.2 
00:08:17.816 See https://btrfs.readthedocs.io for more information. 
00:08:17.816 
00:08:17.816 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:17.816 NOTE: several default settings have changed in version 5.15, please make sure 
00:08:17.816 this does not affect your deployments: 
00:08:17.816 - DUP for metadata (-m dup) 
00:08:17.816 - enabled no-holes (-O no-holes) 
00:08:17.816 - enabled free-space-tree (-R free-space-tree) 
00:08:17.816 
00:08:17.816 Label: (null) 
00:08:17.816 UUID: 0bda05ab-b2e8-4c8b-a353-0d8e2be8c830 
00:08:17.816 Node size: 16384 
00:08:17.816 Sector size: 4096 
00:08:17.816 Filesystem size: 510.00MiB 
00:08:17.816 Block group profiles: 
00:08:17.817 Data: single 8.00MiB 
00:08:17.817 Metadata: DUP 32.00MiB 
00:08:17.817 System: DUP 8.00MiB 
00:08:17.817 SSD detected: yes 
00:08:17.817 Zoned device: no 
00:08:17.817 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 
00:08:17.817 Runtime features: free-space-tree 
00:08:17.817 Checksum: crc32c 
00:08:17.817 Number of devices: 1 
00:08:17.817 Devices: 
00:08:17.817 ID SIZE PATH 
00:08:17.817 1 510.00MiB /dev/nvme0n1p1 
00:08:17.817 
00:08:17.817 16:19:51 -- common/autotest_common.sh@931 -- # return 0 
00:08:17.817 16:19:51 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 
00:08:18.075 16:19:51 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 
00:08:18.075 16:19:51 -- target/filesystem.sh@25 -- # sync 
00:08:18.076 16:19:51 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 
00:08:18.076 16:19:51 -- target/filesystem.sh@27 -- # sync 
00:08:18.076 16:19:51 -- target/filesystem.sh@29 -- # i=0 
00:08:18.076 16:19:51 -- target/filesystem.sh@30 -- # umount /mnt/device 
00:08:18.076 16:19:51 -- target/filesystem.sh@37 -- # kill -0 65211 
00:08:18.076 16:19:51 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 
00:08:18.076 16:19:51 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 
00:08:18.076 16:19:51 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 
00:08:18.076 16:19:51 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:08:18.076 
00:08:18.076 real 0m0.246s 
00:08:18.076 user 0m0.019s 
00:08:18.076 sys 0m0.071s 
00:08:18.076 16:19:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 
00:08:18.076 16:19:51 -- common/autotest_common.sh@10 -- # set +x 
00:08:18.076 ************************************ 
00:08:18.076 END TEST filesystem_btrfs 
00:08:18.076 ************************************ 
00:08:18.076 16:19:51 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 
00:08:18.076 16:19:51 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 
00:08:18.076 16:19:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 
00:08:18.076 16:19:51 -- common/autotest_common.sh@10 -- # set +x 
00:08:18.076 ************************************ 
00:08:18.076 START TEST filesystem_xfs 
00:08:18.076 ************************************ 
00:08:18.076 16:19:52 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 
00:08:18.076 16:19:52 -- target/filesystem.sh@18 -- # fstype=xfs 
00:08:18.076 16:19:52 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 
00:08:18.076 16:19:52 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 
00:08:18.076 16:19:52 -- common/autotest_common.sh@912 -- # local fstype=xfs 
00:08:18.076 16:19:52 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 
00:08:18.076 16:19:52 -- common/autotest_common.sh@914 -- # local i=0 
00:08:18.076 16:19:52 -- common/autotest_common.sh@915 -- # local force 
00:08:18.076 16:19:52 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 
00:08:18.076 16:19:52 -- common/autotest_common.sh@920 -- # force=-f 
00:08:18.076 16:19:52 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:18.335 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:18.335 = sectsz=512 attr=2, projid32bit=1 00:08:18.335 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:18.335 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:18.335 data = bsize=4096 blocks=130560, imaxpct=25 00:08:18.335 = sunit=0 swidth=0 blks 00:08:18.335 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:18.335 log =internal log bsize=4096 blocks=16384, version=2 00:08:18.335 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:18.335 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:18.901 Discarding blocks...Done. 00:08:18.901 16:19:52 -- common/autotest_common.sh@931 -- # return 0 00:08:18.901 16:19:52 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:21.426 16:19:55 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:21.426 16:19:55 -- target/filesystem.sh@25 -- # sync 00:08:21.426 16:19:55 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:21.426 16:19:55 -- target/filesystem.sh@27 -- # sync 00:08:21.426 16:19:55 -- target/filesystem.sh@29 -- # i=0 00:08:21.426 16:19:55 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:21.426 16:19:55 -- target/filesystem.sh@37 -- # kill -0 65211 00:08:21.426 16:19:55 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:21.426 16:19:55 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:21.426 16:19:55 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:21.426 16:19:55 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:21.426 00:08:21.426 real 0m3.124s 00:08:21.426 user 0m0.020s 00:08:21.426 sys 0m0.056s 00:08:21.426 16:19:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:21.426 16:19:55 -- common/autotest_common.sh@10 -- # set +x 00:08:21.426 ************************************ 00:08:21.426 END TEST filesystem_xfs 00:08:21.426 ************************************ 00:08:21.426 16:19:55 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:21.426 16:19:55 -- target/filesystem.sh@93 -- # sync 00:08:21.426 16:19:55 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:21.426 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:21.426 16:19:55 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:21.426 16:19:55 -- common/autotest_common.sh@1205 -- # local i=0 00:08:21.426 16:19:55 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:08:21.426 16:19:55 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:21.426 16:19:55 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:08:21.426 16:19:55 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:21.426 16:19:55 -- common/autotest_common.sh@1217 -- # return 0 00:08:21.426 16:19:55 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:21.426 16:19:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:21.426 16:19:55 -- common/autotest_common.sh@10 -- # set +x 00:08:21.426 16:19:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:21.426 16:19:55 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:21.426 16:19:55 -- target/filesystem.sh@101 -- # killprocess 65211 00:08:21.426 16:19:55 -- common/autotest_common.sh@936 -- # '[' -z 65211 ']' 00:08:21.426 16:19:55 -- common/autotest_common.sh@940 -- # kill -0 65211 00:08:21.426 16:19:55 -- 
common/autotest_common.sh@941 -- # uname 00:08:21.426 16:19:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:21.426 16:19:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65211 00:08:21.426 16:19:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:21.426 16:19:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:21.426 killing process with pid 65211 00:08:21.426 16:19:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65211' 00:08:21.426 16:19:55 -- common/autotest_common.sh@955 -- # kill 65211 00:08:21.426 16:19:55 -- common/autotest_common.sh@960 -- # wait 65211 00:08:21.994 16:19:55 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:21.994 00:08:21.994 real 0m9.330s 00:08:21.994 user 0m34.965s 00:08:21.994 sys 0m1.757s 00:08:21.994 16:19:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:21.994 16:19:55 -- common/autotest_common.sh@10 -- # set +x 00:08:21.994 ************************************ 00:08:21.994 END TEST nvmf_filesystem_no_in_capsule 00:08:21.994 ************************************ 00:08:21.994 16:19:55 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:21.994 16:19:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:21.994 16:19:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:21.994 16:19:55 -- common/autotest_common.sh@10 -- # set +x 00:08:21.994 ************************************ 00:08:21.994 START TEST nvmf_filesystem_in_capsule 00:08:21.994 ************************************ 00:08:21.994 16:19:55 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 4096 00:08:21.994 16:19:55 -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:21.994 16:19:55 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:21.994 16:19:55 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:21.994 16:19:55 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:21.994 16:19:55 -- common/autotest_common.sh@10 -- # set +x 00:08:21.994 16:19:55 -- nvmf/common.sh@470 -- # nvmfpid=65540 00:08:21.994 16:19:55 -- nvmf/common.sh@471 -- # waitforlisten 65540 00:08:21.994 16:19:55 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:21.994 16:19:55 -- common/autotest_common.sh@817 -- # '[' -z 65540 ']' 00:08:21.994 16:19:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.994 16:19:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:21.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.994 16:19:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.994 16:19:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:21.994 16:19:55 -- common/autotest_common.sh@10 -- # set +x 00:08:21.994 [2024-04-17 16:19:55.973382] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 
00:08:21.994 [2024-04-17 16:19:55.973478] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:22.252 [2024-04-17 16:19:56.107898] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:22.252 [2024-04-17 16:19:56.228220] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:22.252 [2024-04-17 16:19:56.228280] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:22.252 [2024-04-17 16:19:56.228291] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:22.252 [2024-04-17 16:19:56.228300] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:22.252 [2024-04-17 16:19:56.228308] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:22.252 [2024-04-17 16:19:56.228477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:22.252 [2024-04-17 16:19:56.228720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:22.252 [2024-04-17 16:19:56.229236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:22.252 [2024-04-17 16:19:56.229290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.214 16:19:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:23.214 16:19:56 -- common/autotest_common.sh@850 -- # return 0 00:08:23.214 16:19:56 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:23.214 16:19:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:23.214 16:19:56 -- common/autotest_common.sh@10 -- # set +x 00:08:23.214 16:19:56 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:23.214 16:19:56 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:23.214 16:19:56 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:23.214 16:19:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:23.214 16:19:56 -- common/autotest_common.sh@10 -- # set +x 00:08:23.214 [2024-04-17 16:19:56.969591] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:23.214 16:19:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:23.214 16:19:56 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:23.214 16:19:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:23.214 16:19:56 -- common/autotest_common.sh@10 -- # set +x 00:08:23.214 Malloc1 00:08:23.214 16:19:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:23.214 16:19:57 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:23.214 16:19:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:23.214 16:19:57 -- common/autotest_common.sh@10 -- # set +x 00:08:23.214 16:19:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:23.214 16:19:57 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:23.214 16:19:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:23.214 16:19:57 -- common/autotest_common.sh@10 -- # set +x 00:08:23.214 16:19:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:23.214 16:19:57 -- target/filesystem.sh@56 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:23.214 16:19:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:23.214 16:19:57 -- common/autotest_common.sh@10 -- # set +x 00:08:23.214 [2024-04-17 16:19:57.170272] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:23.214 16:19:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:23.214 16:19:57 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:23.214 16:19:57 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:08:23.214 16:19:57 -- common/autotest_common.sh@1365 -- # local bdev_info 00:08:23.214 16:19:57 -- common/autotest_common.sh@1366 -- # local bs 00:08:23.214 16:19:57 -- common/autotest_common.sh@1367 -- # local nb 00:08:23.214 16:19:57 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:23.214 16:19:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:23.214 16:19:57 -- common/autotest_common.sh@10 -- # set +x 00:08:23.214 16:19:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:23.214 16:19:57 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:08:23.214 { 00:08:23.214 "aliases": [ 00:08:23.214 "77bd111f-aa2c-46e4-a167-ddc9501d01ff" 00:08:23.214 ], 00:08:23.214 "assigned_rate_limits": { 00:08:23.214 "r_mbytes_per_sec": 0, 00:08:23.214 "rw_ios_per_sec": 0, 00:08:23.214 "rw_mbytes_per_sec": 0, 00:08:23.214 "w_mbytes_per_sec": 0 00:08:23.214 }, 00:08:23.214 "block_size": 512, 00:08:23.214 "claim_type": "exclusive_write", 00:08:23.214 "claimed": true, 00:08:23.214 "driver_specific": {}, 00:08:23.214 "memory_domains": [ 00:08:23.214 { 00:08:23.214 "dma_device_id": "system", 00:08:23.214 "dma_device_type": 1 00:08:23.214 }, 00:08:23.214 { 00:08:23.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.214 "dma_device_type": 2 00:08:23.214 } 00:08:23.214 ], 00:08:23.214 "name": "Malloc1", 00:08:23.214 "num_blocks": 1048576, 00:08:23.214 "product_name": "Malloc disk", 00:08:23.214 "supported_io_types": { 00:08:23.214 "abort": true, 00:08:23.214 "compare": false, 00:08:23.214 "compare_and_write": false, 00:08:23.214 "flush": true, 00:08:23.214 "nvme_admin": false, 00:08:23.214 "nvme_io": false, 00:08:23.214 "read": true, 00:08:23.214 "reset": true, 00:08:23.214 "unmap": true, 00:08:23.214 "write": true, 00:08:23.214 "write_zeroes": true 00:08:23.214 }, 00:08:23.214 "uuid": "77bd111f-aa2c-46e4-a167-ddc9501d01ff", 00:08:23.214 "zoned": false 00:08:23.214 } 00:08:23.214 ]' 00:08:23.214 16:19:57 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:08:23.214 16:19:57 -- common/autotest_common.sh@1369 -- # bs=512 00:08:23.214 16:19:57 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:08:23.472 16:19:57 -- common/autotest_common.sh@1370 -- # nb=1048576 00:08:23.472 16:19:57 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:08:23.472 16:19:57 -- common/autotest_common.sh@1374 -- # echo 512 00:08:23.472 16:19:57 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:23.472 16:19:57 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d --hostid=35bbb10f-fc38-42ac-b909-033700c5e05d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:23.472 16:19:57 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:23.472 16:19:57 -- common/autotest_common.sh@1184 -- # local i=0 00:08:23.472 16:19:57 -- common/autotest_common.sh@1185 -- # local 
nvme_device_counter=1 nvme_devices=0 00:08:23.472 16:19:57 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:08:23.472 16:19:57 -- common/autotest_common.sh@1191 -- # sleep 2 00:08:26.003 16:19:59 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:26.003 16:19:59 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:26.003 16:19:59 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:26.003 16:19:59 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:08:26.003 16:19:59 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:08:26.003 16:19:59 -- common/autotest_common.sh@1194 -- # return 0 00:08:26.003 16:19:59 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:26.003 16:19:59 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:26.003 16:19:59 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:26.003 16:19:59 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:26.003 16:19:59 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:26.003 16:19:59 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:26.003 16:19:59 -- setup/common.sh@80 -- # echo 536870912 00:08:26.003 16:19:59 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:26.003 16:19:59 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:26.003 16:19:59 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:26.003 16:19:59 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:26.003 16:19:59 -- target/filesystem.sh@69 -- # partprobe 00:08:26.003 16:19:59 -- target/filesystem.sh@70 -- # sleep 1 00:08:26.937 16:20:00 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:26.937 16:20:00 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:26.937 16:20:00 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:26.937 16:20:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:26.937 16:20:00 -- common/autotest_common.sh@10 -- # set +x 00:08:26.937 ************************************ 00:08:26.937 START TEST filesystem_in_capsule_ext4 00:08:26.937 ************************************ 00:08:26.937 16:20:00 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:26.937 16:20:00 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:26.937 16:20:00 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:26.937 16:20:00 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:26.937 16:20:00 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:08:26.937 16:20:00 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:26.937 16:20:00 -- common/autotest_common.sh@914 -- # local i=0 00:08:26.937 16:20:00 -- common/autotest_common.sh@915 -- # local force 00:08:26.937 16:20:00 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:08:26.937 16:20:00 -- common/autotest_common.sh@918 -- # force=-F 00:08:26.937 16:20:00 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:26.937 mke2fs 1.46.5 (30-Dec-2021) 00:08:26.937 Discarding device blocks: 0/522240 done 00:08:26.937 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:26.937 Filesystem UUID: 7dd85232-e09e-41ef-b92a-739cf3309bdf 00:08:26.937 Superblock backups stored on blocks: 00:08:26.937 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:26.937 00:08:26.937 Allocating group tables: 0/64 done 
00:08:26.937 Writing inode tables: 0/64 done 00:08:26.937 Creating journal (8192 blocks): done 00:08:26.937 Writing superblocks and filesystem accounting information: 0/64 done 00:08:26.937 00:08:26.937 16:20:00 -- common/autotest_common.sh@931 -- # return 0 00:08:26.937 16:20:00 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:26.937 16:20:00 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:27.195 16:20:01 -- target/filesystem.sh@25 -- # sync 00:08:27.196 16:20:01 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:27.196 16:20:01 -- target/filesystem.sh@27 -- # sync 00:08:27.196 16:20:01 -- target/filesystem.sh@29 -- # i=0 00:08:27.196 16:20:01 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:27.196 16:20:01 -- target/filesystem.sh@37 -- # kill -0 65540 00:08:27.196 16:20:01 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:27.196 16:20:01 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:27.196 16:20:01 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:27.196 16:20:01 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:27.196 00:08:27.196 real 0m0.434s 00:08:27.196 user 0m0.025s 00:08:27.196 sys 0m0.048s 00:08:27.196 16:20:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:27.196 16:20:01 -- common/autotest_common.sh@10 -- # set +x 00:08:27.196 ************************************ 00:08:27.196 END TEST filesystem_in_capsule_ext4 00:08:27.196 ************************************ 00:08:27.196 16:20:01 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:27.196 16:20:01 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:27.196 16:20:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:27.196 16:20:01 -- common/autotest_common.sh@10 -- # set +x 00:08:27.196 ************************************ 00:08:27.196 START TEST filesystem_in_capsule_btrfs 00:08:27.196 ************************************ 00:08:27.196 16:20:01 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:27.196 16:20:01 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:27.196 16:20:01 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:27.196 16:20:01 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:27.196 16:20:01 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:08:27.196 16:20:01 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:27.196 16:20:01 -- common/autotest_common.sh@914 -- # local i=0 00:08:27.196 16:20:01 -- common/autotest_common.sh@915 -- # local force 00:08:27.196 16:20:01 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:08:27.196 16:20:01 -- common/autotest_common.sh@920 -- # force=-f 00:08:27.196 16:20:01 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:27.453 btrfs-progs v6.6.2 00:08:27.453 See https://btrfs.readthedocs.io for more information. 00:08:27.453 00:08:27.453 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:27.453 NOTE: several default settings have changed in version 5.15, please make sure 00:08:27.453 this does not affect your deployments: 00:08:27.453 - DUP for metadata (-m dup) 00:08:27.453 - enabled no-holes (-O no-holes) 00:08:27.453 - enabled free-space-tree (-R free-space-tree) 00:08:27.453 00:08:27.453 Label: (null) 00:08:27.453 UUID: c3b78717-2fd7-457a-b598-edebbad7fb27 00:08:27.453 Node size: 16384 00:08:27.453 Sector size: 4096 00:08:27.453 Filesystem size: 510.00MiB 00:08:27.453 Block group profiles: 00:08:27.453 Data: single 8.00MiB 00:08:27.453 Metadata: DUP 32.00MiB 00:08:27.453 System: DUP 8.00MiB 00:08:27.453 SSD detected: yes 00:08:27.453 Zoned device: no 00:08:27.453 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:27.453 Runtime features: free-space-tree 00:08:27.453 Checksum: crc32c 00:08:27.453 Number of devices: 1 00:08:27.453 Devices: 00:08:27.453 ID SIZE PATH 00:08:27.453 1 510.00MiB /dev/nvme0n1p1 00:08:27.453 00:08:27.453 16:20:01 -- common/autotest_common.sh@931 -- # return 0 00:08:27.453 16:20:01 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:27.453 16:20:01 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:27.453 16:20:01 -- target/filesystem.sh@25 -- # sync 00:08:27.453 16:20:01 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:27.454 16:20:01 -- target/filesystem.sh@27 -- # sync 00:08:27.454 16:20:01 -- target/filesystem.sh@29 -- # i=0 00:08:27.454 16:20:01 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:27.454 16:20:01 -- target/filesystem.sh@37 -- # kill -0 65540 00:08:27.454 16:20:01 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:27.454 16:20:01 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:27.454 16:20:01 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:27.454 16:20:01 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:27.454 00:08:27.454 real 0m0.228s 00:08:27.454 user 0m0.027s 00:08:27.454 sys 0m0.057s 00:08:27.454 16:20:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:27.454 16:20:01 -- common/autotest_common.sh@10 -- # set +x 00:08:27.454 ************************************ 00:08:27.454 END TEST filesystem_in_capsule_btrfs 00:08:27.454 ************************************ 00:08:27.454 16:20:01 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:27.454 16:20:01 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:27.454 16:20:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:27.454 16:20:01 -- common/autotest_common.sh@10 -- # set +x 00:08:27.711 ************************************ 00:08:27.711 START TEST filesystem_in_capsule_xfs 00:08:27.711 ************************************ 00:08:27.711 16:20:01 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:08:27.711 16:20:01 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:27.711 16:20:01 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:27.711 16:20:01 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:27.711 16:20:01 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:08:27.711 16:20:01 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:27.711 16:20:01 -- common/autotest_common.sh@914 -- # local i=0 00:08:27.711 16:20:01 -- common/autotest_common.sh@915 -- # local force 00:08:27.711 16:20:01 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:08:27.711 16:20:01 -- common/autotest_common.sh@920 -- # force=-f 
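The make_filesystem trace interleaved above reduces to a small helper: pick the force flag per filesystem (ext4 wants -F, btrfs and xfs take -f), then run mkfs with a bounded retry. A minimal bash sketch of that shape — the retry budget and sleep interval are assumptions, not values read from this trace:

    make_filesystem() {
        local fstype=$1 dev_name=$2
        local i=0 force
        # mkfs.ext4 needs -F to overwrite an existing filesystem; mkfs.btrfs/mkfs.xfs use -f
        if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
        # retry a few times in case the partition node is still settling (budget assumed)
        until mkfs."$fstype" "$force" "$dev_name"; do
            i=$((i + 1))
            [ "$i" -lt 3 ] || return 1
            sleep 1
        done
        return 0
    }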
00:08:27.711 16:20:01 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:27.711 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:27.711 = sectsz=512 attr=2, projid32bit=1 00:08:27.711 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:27.711 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:27.711 data = bsize=4096 blocks=130560, imaxpct=25 00:08:27.711 = sunit=0 swidth=0 blks 00:08:27.711 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:27.711 log =internal log bsize=4096 blocks=16384, version=2 00:08:27.711 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:27.711 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:28.643 Discarding blocks...Done. 00:08:28.643 16:20:02 -- common/autotest_common.sh@931 -- # return 0 00:08:28.643 16:20:02 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:30.542 16:20:04 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:30.542 16:20:04 -- target/filesystem.sh@25 -- # sync 00:08:30.542 16:20:04 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:30.542 16:20:04 -- target/filesystem.sh@27 -- # sync 00:08:30.542 16:20:04 -- target/filesystem.sh@29 -- # i=0 00:08:30.542 16:20:04 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:30.542 16:20:04 -- target/filesystem.sh@37 -- # kill -0 65540 00:08:30.542 16:20:04 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:30.542 16:20:04 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:30.542 16:20:04 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:30.542 16:20:04 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:30.542 00:08:30.542 real 0m2.604s 00:08:30.542 user 0m0.023s 00:08:30.542 sys 0m0.050s 00:08:30.542 ************************************ 00:08:30.542 END TEST filesystem_in_capsule_xfs 00:08:30.542 ************************************ 00:08:30.542 16:20:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:30.542 16:20:04 -- common/autotest_common.sh@10 -- # set +x 00:08:30.542 16:20:04 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:30.542 16:20:04 -- target/filesystem.sh@93 -- # sync 00:08:30.542 16:20:04 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:30.542 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:30.542 16:20:04 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:30.542 16:20:04 -- common/autotest_common.sh@1205 -- # local i=0 00:08:30.542 16:20:04 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:08:30.542 16:20:04 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:30.542 16:20:04 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:08:30.542 16:20:04 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:30.542 16:20:04 -- common/autotest_common.sh@1217 -- # return 0 00:08:30.542 16:20:04 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:30.542 16:20:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:30.542 16:20:04 -- common/autotest_common.sh@10 -- # set +x 00:08:30.542 16:20:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:30.542 16:20:04 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:30.542 16:20:04 -- target/filesystem.sh@101 -- # killprocess 65540 00:08:30.542 16:20:04 -- common/autotest_common.sh@936 -- # '[' -z 65540 ']' 00:08:30.543 16:20:04 -- common/autotest_common.sh@940 -- # kill -0 65540 
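The teardown traced above is identical for every variant: drop the test partition under flock, sync, disconnect the controller by NQN, and poll lsblk until the test serial disappears before deleting the subsystem. Condensed below, with the device, serial, and NQN taken from this log; the shape of the poll loop is an assumption (the harness uses a bounded counter):

    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    # poll until no block device reports the test serial any more (loop bound assumed)
    while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
        sleep 1
    done
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1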
00:08:30.543 16:20:04 -- common/autotest_common.sh@941 -- # uname 00:08:30.543 16:20:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:30.543 16:20:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65540 00:08:30.543 killing process with pid 65540 00:08:30.543 16:20:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:30.543 16:20:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:30.543 16:20:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65540' 00:08:30.543 16:20:04 -- common/autotest_common.sh@955 -- # kill 65540 00:08:30.543 16:20:04 -- common/autotest_common.sh@960 -- # wait 65540 00:08:30.801 ************************************ 00:08:30.801 END TEST nvmf_filesystem_in_capsule 00:08:30.801 ************************************ 00:08:30.801 16:20:04 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:30.801 00:08:30.801 real 0m8.908s 00:08:30.801 user 0m33.593s 00:08:30.801 sys 0m1.490s 00:08:30.801 16:20:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:30.801 16:20:04 -- common/autotest_common.sh@10 -- # set +x 00:08:31.058 16:20:04 -- target/filesystem.sh@108 -- # nvmftestfini 00:08:31.058 16:20:04 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:31.058 16:20:04 -- nvmf/common.sh@117 -- # sync 00:08:31.058 16:20:04 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:31.058 16:20:04 -- nvmf/common.sh@120 -- # set +e 00:08:31.058 16:20:04 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:31.058 16:20:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:31.058 rmmod nvme_tcp 00:08:31.058 rmmod nvme_fabrics 00:08:31.058 rmmod nvme_keyring 00:08:31.058 16:20:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:31.058 16:20:04 -- nvmf/common.sh@124 -- # set -e 00:08:31.058 16:20:04 -- nvmf/common.sh@125 -- # return 0 00:08:31.058 16:20:04 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:08:31.058 16:20:04 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:31.058 16:20:04 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:31.058 16:20:04 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:31.058 16:20:04 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:31.058 16:20:04 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:31.058 16:20:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.058 16:20:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:31.058 16:20:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.058 16:20:04 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:31.058 00:08:31.058 real 0m19.176s 00:08:31.058 user 1m8.846s 00:08:31.058 sys 0m3.703s 00:08:31.058 16:20:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:31.058 16:20:04 -- common/autotest_common.sh@10 -- # set +x 00:08:31.058 ************************************ 00:08:31.058 END TEST nvmf_filesystem 00:08:31.058 ************************************ 00:08:31.058 16:20:05 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:31.058 16:20:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:31.058 16:20:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:31.058 16:20:05 -- common/autotest_common.sh@10 -- # set +x 00:08:31.058 ************************************ 00:08:31.058 START TEST nvmf_discovery 00:08:31.058 ************************************ 00:08:31.058 16:20:05 -- 
common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:31.316 * Looking for test storage... 00:08:31.316 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:31.316 16:20:05 -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:31.316 16:20:05 -- nvmf/common.sh@7 -- # uname -s 00:08:31.316 16:20:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:31.316 16:20:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:31.316 16:20:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:31.316 16:20:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:31.316 16:20:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:31.316 16:20:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:31.316 16:20:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:31.316 16:20:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:31.316 16:20:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:31.316 16:20:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:31.316 16:20:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d 00:08:31.316 16:20:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=35bbb10f-fc38-42ac-b909-033700c5e05d 00:08:31.316 16:20:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:31.316 16:20:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:31.316 16:20:05 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:31.316 16:20:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:31.316 16:20:05 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:31.316 16:20:05 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:31.316 16:20:05 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:31.316 16:20:05 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:31.316 16:20:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.316 16:20:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.316 16:20:05 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.316 16:20:05 -- paths/export.sh@5 -- # export PATH 00:08:31.316 16:20:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.316 16:20:05 -- nvmf/common.sh@47 -- # : 0 00:08:31.316 16:20:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:31.316 16:20:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:31.316 16:20:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:31.316 16:20:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:31.316 16:20:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:31.316 16:20:05 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:31.316 16:20:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:31.316 16:20:05 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:31.316 16:20:05 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:31.316 16:20:05 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:31.316 16:20:05 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:31.316 16:20:05 -- target/discovery.sh@15 -- # hash nvme 00:08:31.316 16:20:05 -- target/discovery.sh@20 -- # nvmftestinit 00:08:31.316 16:20:05 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:31.316 16:20:05 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:31.316 16:20:05 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:31.316 16:20:05 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:31.316 16:20:05 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:31.316 16:20:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.316 16:20:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:31.316 16:20:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.316 16:20:05 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:08:31.316 16:20:05 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:08:31.316 16:20:05 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:08:31.316 16:20:05 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:08:31.316 16:20:05 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:08:31.316 16:20:05 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:08:31.316 16:20:05 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:31.316 16:20:05 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:31.316 16:20:05 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:31.316 16:20:05 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:31.316 16:20:05 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:31.316 16:20:05 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:31.316 16:20:05 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:31.316 16:20:05 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:31.317 16:20:05 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:31.317 16:20:05 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:31.317 16:20:05 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:31.317 16:20:05 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:31.317 16:20:05 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:31.317 16:20:05 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:31.317 Cannot find device "nvmf_tgt_br" 00:08:31.317 16:20:05 -- nvmf/common.sh@155 -- # true 00:08:31.317 16:20:05 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:31.317 Cannot find device "nvmf_tgt_br2" 00:08:31.317 16:20:05 -- nvmf/common.sh@156 -- # true 00:08:31.317 16:20:05 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:31.317 16:20:05 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:31.317 Cannot find device "nvmf_tgt_br" 00:08:31.317 16:20:05 -- nvmf/common.sh@158 -- # true 00:08:31.317 16:20:05 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:31.317 Cannot find device "nvmf_tgt_br2" 00:08:31.317 16:20:05 -- nvmf/common.sh@159 -- # true 00:08:31.317 16:20:05 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:31.317 16:20:05 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:31.317 16:20:05 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:31.317 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:31.317 16:20:05 -- nvmf/common.sh@162 -- # true 00:08:31.317 16:20:05 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:31.317 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:31.317 16:20:05 -- nvmf/common.sh@163 -- # true 00:08:31.317 16:20:05 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:31.575 16:20:05 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:31.575 16:20:05 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:31.575 16:20:05 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:31.575 16:20:05 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:31.575 16:20:05 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:31.575 16:20:05 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:31.575 16:20:05 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:31.575 16:20:05 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:31.575 16:20:05 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:31.575 16:20:05 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:31.575 16:20:05 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:31.575 16:20:05 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:31.575 16:20:05 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:31.575 16:20:05 
-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:31.575 16:20:05 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:31.575 16:20:05 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:31.575 16:20:05 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:31.575 16:20:05 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:31.575 16:20:05 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:31.575 16:20:05 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:31.575 16:20:05 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:31.575 16:20:05 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:31.575 16:20:05 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:31.575 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:31.575 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:08:31.575 00:08:31.575 --- 10.0.0.2 ping statistics --- 00:08:31.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.575 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:08:31.575 16:20:05 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:31.575 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:31.575 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:08:31.575 00:08:31.575 --- 10.0.0.3 ping statistics --- 00:08:31.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.575 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:08:31.575 16:20:05 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:31.575 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:31.575 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:08:31.575 00:08:31.575 --- 10.0.0.1 ping statistics --- 00:08:31.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.575 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:08:31.575 16:20:05 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:31.575 16:20:05 -- nvmf/common.sh@422 -- # return 0 00:08:31.575 16:20:05 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:31.575 16:20:05 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:31.575 16:20:05 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:31.575 16:20:05 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:31.575 16:20:05 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:31.575 16:20:05 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:31.575 16:20:05 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:31.833 16:20:05 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:31.833 16:20:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:31.833 16:20:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:31.833 16:20:05 -- common/autotest_common.sh@10 -- # set +x 00:08:31.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
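The nvmf_veth_init trace above builds a small fixed topology: the target's interface lives in a private namespace, the host-side veth peers are bridged back to the initiator interface, and TCP port 4420 is opened before the connectivity pings. The same topology reduced to its essential commands, with addresses and interface names as in the log:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # target side
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # initiator -> target, as verified above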
00:08:31.833 16:20:05 -- nvmf/common.sh@470 -- # nvmfpid=66017 00:08:31.833 16:20:05 -- nvmf/common.sh@471 -- # waitforlisten 66017 00:08:31.833 16:20:05 -- common/autotest_common.sh@817 -- # '[' -z 66017 ']' 00:08:31.833 16:20:05 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:31.833 16:20:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.833 16:20:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:31.833 16:20:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.833 16:20:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:31.833 16:20:05 -- common/autotest_common.sh@10 -- # set +x 00:08:31.833 [2024-04-17 16:20:05.681287] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:08:31.833 [2024-04-17 16:20:05.681375] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.833 [2024-04-17 16:20:05.821816] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:32.091 [2024-04-17 16:20:05.959966] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:32.091 [2024-04-17 16:20:05.960306] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:32.091 [2024-04-17 16:20:05.960581] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:32.091 [2024-04-17 16:20:05.960728] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:32.091 [2024-04-17 16:20:05.960786] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
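The app_setup_trace notices above describe the tracepoint snapshot workflow for this target (started with -e 0xFFFF). Two ways to use it, with the spdk_trace binary path assumed to match this build tree:

    # live: attach to shm instance 0 of the app named "nvmf" inside its namespace
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s nvmf -i 0
    # offline: copy the shared-memory file out and read it later, as the notice suggests
    cp /dev/shm/nvmf_trace.0 /tmp/
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -f /tmp/nvmf_trace.0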
00:08:32.091 [2024-04-17 16:20:05.961133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.091 [2024-04-17 16:20:05.961248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:32.091 [2024-04-17 16:20:05.961310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.091 [2024-04-17 16:20:05.961309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:32.657 16:20:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:32.657 16:20:06 -- common/autotest_common.sh@850 -- # return 0 00:08:32.657 16:20:06 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:32.657 16:20:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:32.657 16:20:06 -- common/autotest_common.sh@10 -- # set +x 00:08:32.657 16:20:06 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:32.657 16:20:06 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:32.657 16:20:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:32.657 16:20:06 -- common/autotest_common.sh@10 -- # set +x 00:08:32.657 [2024-04-17 16:20:06.650895] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:32.657 16:20:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:32.657 16:20:06 -- target/discovery.sh@26 -- # seq 1 4 00:08:32.657 16:20:06 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:32.657 16:20:06 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:32.657 16:20:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:32.657 16:20:06 -- common/autotest_common.sh@10 -- # set +x 00:08:32.657 Null1 00:08:32.657 16:20:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:32.657 16:20:06 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:32.657 16:20:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:32.657 16:20:06 -- common/autotest_common.sh@10 -- # set +x 00:08:32.657 16:20:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:32.657 16:20:06 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:32.657 16:20:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:32.657 16:20:06 -- common/autotest_common.sh@10 -- # set +x 00:08:32.916 16:20:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:32.916 16:20:06 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:32.916 16:20:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:32.916 16:20:06 -- common/autotest_common.sh@10 -- # set +x 00:08:32.916 [2024-04-17 16:20:06.717711] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:32.916 16:20:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:32.916 16:20:06 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:32.916 16:20:06 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:32.916 16:20:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:32.916 16:20:06 -- common/autotest_common.sh@10 -- # set +x 00:08:32.916 Null2 00:08:32.916 16:20:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:32.916 16:20:06 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:32.916 16:20:06 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:08:32.916 16:20:06 -- common/autotest_common.sh@10 -- # set +x 00:08:32.916 16:20:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:32.916 16:20:06 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:32.916 16:20:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:32.916 16:20:06 -- common/autotest_common.sh@10 -- # set +x 00:08:32.916 16:20:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:32.916 16:20:06 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:32.916 16:20:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:32.916 16:20:06 -- common/autotest_common.sh@10 -- # set +x 00:08:32.916 16:20:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:32.916 16:20:06 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:32.917 16:20:06 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:32.917 16:20:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:32.917 16:20:06 -- common/autotest_common.sh@10 -- # set +x 00:08:32.917 Null3 00:08:32.917 16:20:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:32.917 16:20:06 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:32.917 16:20:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:32.917 16:20:06 -- common/autotest_common.sh@10 -- # set +x 00:08:32.917 16:20:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:32.917 16:20:06 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:32.917 16:20:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:32.917 16:20:06 -- common/autotest_common.sh@10 -- # set +x 00:08:32.917 16:20:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:32.917 16:20:06 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:32.917 16:20:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:32.917 16:20:06 -- common/autotest_common.sh@10 -- # set +x 00:08:32.917 16:20:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:32.917 16:20:06 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:32.917 16:20:06 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:32.917 16:20:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:32.917 16:20:06 -- common/autotest_common.sh@10 -- # set +x 00:08:32.917 Null4 00:08:32.917 16:20:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:32.917 16:20:06 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:32.917 16:20:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:32.917 16:20:06 -- common/autotest_common.sh@10 -- # set +x 00:08:32.917 16:20:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:32.917 16:20:06 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:32.917 16:20:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:32.917 16:20:06 -- common/autotest_common.sh@10 -- # set +x 00:08:32.917 16:20:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:32.917 16:20:06 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:32.917 
16:20:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:32.917 16:20:06 -- common/autotest_common.sh@10 -- # set +x 00:08:32.917 16:20:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:32.917 16:20:06 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:32.917 16:20:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:32.917 16:20:06 -- common/autotest_common.sh@10 -- # set +x 00:08:32.917 16:20:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:32.917 16:20:06 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:32.917 16:20:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:32.917 16:20:06 -- common/autotest_common.sh@10 -- # set +x 00:08:32.917 16:20:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:32.917 16:20:06 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d --hostid=35bbb10f-fc38-42ac-b909-033700c5e05d -t tcp -a 10.0.0.2 -s 4420 00:08:32.917 00:08:32.917 Discovery Log Number of Records 6, Generation counter 6 00:08:32.917 =====Discovery Log Entry 0====== 00:08:32.917 trtype: tcp 00:08:32.917 adrfam: ipv4 00:08:32.917 subtype: current discovery subsystem 00:08:32.917 treq: not required 00:08:32.917 portid: 0 00:08:32.917 trsvcid: 4420 00:08:32.917 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:32.917 traddr: 10.0.0.2 00:08:32.917 eflags: explicit discovery connections, duplicate discovery information 00:08:32.917 sectype: none 00:08:32.917 =====Discovery Log Entry 1====== 00:08:32.917 trtype: tcp 00:08:32.917 adrfam: ipv4 00:08:32.917 subtype: nvme subsystem 00:08:32.917 treq: not required 00:08:32.917 portid: 0 00:08:32.917 trsvcid: 4420 00:08:32.917 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:32.917 traddr: 10.0.0.2 00:08:32.917 eflags: none 00:08:32.917 sectype: none 00:08:32.917 =====Discovery Log Entry 2====== 00:08:32.917 trtype: tcp 00:08:32.917 adrfam: ipv4 00:08:32.917 subtype: nvme subsystem 00:08:32.917 treq: not required 00:08:32.917 portid: 0 00:08:32.917 trsvcid: 4420 00:08:32.917 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:32.917 traddr: 10.0.0.2 00:08:32.917 eflags: none 00:08:32.917 sectype: none 00:08:32.917 =====Discovery Log Entry 3====== 00:08:32.917 trtype: tcp 00:08:32.917 adrfam: ipv4 00:08:32.917 subtype: nvme subsystem 00:08:32.917 treq: not required 00:08:32.917 portid: 0 00:08:32.917 trsvcid: 4420 00:08:32.917 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:32.917 traddr: 10.0.0.2 00:08:32.917 eflags: none 00:08:32.917 sectype: none 00:08:32.917 =====Discovery Log Entry 4====== 00:08:32.917 trtype: tcp 00:08:32.917 adrfam: ipv4 00:08:32.917 subtype: nvme subsystem 00:08:32.917 treq: not required 00:08:32.917 portid: 0 00:08:32.917 trsvcid: 4420 00:08:32.917 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:32.917 traddr: 10.0.0.2 00:08:32.917 eflags: none 00:08:32.917 sectype: none 00:08:32.917 =====Discovery Log Entry 5====== 00:08:32.917 trtype: tcp 00:08:32.917 adrfam: ipv4 00:08:32.917 subtype: discovery subsystem referral 00:08:32.917 treq: not required 00:08:32.917 portid: 0 00:08:32.917 trsvcid: 4430 00:08:32.917 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:32.917 traddr: 10.0.0.2 00:08:32.917 eflags: none 00:08:32.917 sectype: none 00:08:32.917 16:20:06 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:32.917 Perform nvmf subsystem discovery via RPC 00:08:32.917 16:20:06 -- 
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:32.917 16:20:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:32.917 16:20:06 -- common/autotest_common.sh@10 -- # set +x 00:08:32.917 [2024-04-17 16:20:06.913671] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:32.917 [ 00:08:32.917 { 00:08:32.917 "allow_any_host": true, 00:08:32.917 "hosts": [], 00:08:32.917 "listen_addresses": [ 00:08:32.917 { 00:08:32.917 "adrfam": "IPv4", 00:08:32.917 "traddr": "10.0.0.2", 00:08:32.917 "transport": "TCP", 00:08:32.917 "trsvcid": "4420", 00:08:32.917 "trtype": "TCP" 00:08:32.917 } 00:08:32.917 ], 00:08:32.917 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:32.917 "subtype": "Discovery" 00:08:32.917 }, 00:08:32.917 { 00:08:32.917 "allow_any_host": true, 00:08:32.917 "hosts": [], 00:08:32.917 "listen_addresses": [ 00:08:32.917 { 00:08:32.917 "adrfam": "IPv4", 00:08:32.917 "traddr": "10.0.0.2", 00:08:32.917 "transport": "TCP", 00:08:32.917 "trsvcid": "4420", 00:08:32.917 "trtype": "TCP" 00:08:32.917 } 00:08:32.917 ], 00:08:32.917 "max_cntlid": 65519, 00:08:32.917 "max_namespaces": 32, 00:08:32.917 "min_cntlid": 1, 00:08:32.917 "model_number": "SPDK bdev Controller", 00:08:32.917 "namespaces": [ 00:08:32.917 { 00:08:32.917 "bdev_name": "Null1", 00:08:32.917 "name": "Null1", 00:08:32.917 "nguid": "17DC0AF95C72424AA4EDD82E35F795D7", 00:08:32.917 "nsid": 1, 00:08:32.917 "uuid": "17dc0af9-5c72-424a-a4ed-d82e35f795d7" 00:08:32.917 } 00:08:32.917 ], 00:08:32.917 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:32.917 "serial_number": "SPDK00000000000001", 00:08:32.917 "subtype": "NVMe" 00:08:32.917 }, 00:08:32.917 { 00:08:32.917 "allow_any_host": true, 00:08:32.917 "hosts": [], 00:08:32.917 "listen_addresses": [ 00:08:32.917 { 00:08:32.917 "adrfam": "IPv4", 00:08:32.917 "traddr": "10.0.0.2", 00:08:32.917 "transport": "TCP", 00:08:32.917 "trsvcid": "4420", 00:08:32.917 "trtype": "TCP" 00:08:32.917 } 00:08:32.917 ], 00:08:32.917 "max_cntlid": 65519, 00:08:32.917 "max_namespaces": 32, 00:08:32.917 "min_cntlid": 1, 00:08:32.917 "model_number": "SPDK bdev Controller", 00:08:32.917 "namespaces": [ 00:08:32.917 { 00:08:32.917 "bdev_name": "Null2", 00:08:32.917 "name": "Null2", 00:08:32.917 "nguid": "DC27CAEA7AFB435BAEBA13707B792A01", 00:08:32.917 "nsid": 1, 00:08:32.917 "uuid": "dc27caea-7afb-435b-aeba-13707b792a01" 00:08:32.917 } 00:08:32.917 ], 00:08:32.917 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:32.917 "serial_number": "SPDK00000000000002", 00:08:32.917 "subtype": "NVMe" 00:08:32.917 }, 00:08:32.917 { 00:08:32.917 "allow_any_host": true, 00:08:32.917 "hosts": [], 00:08:32.917 "listen_addresses": [ 00:08:32.917 { 00:08:32.917 "adrfam": "IPv4", 00:08:32.917 "traddr": "10.0.0.2", 00:08:32.917 "transport": "TCP", 00:08:32.917 "trsvcid": "4420", 00:08:32.917 "trtype": "TCP" 00:08:32.917 } 00:08:32.917 ], 00:08:32.917 "max_cntlid": 65519, 00:08:32.917 "max_namespaces": 32, 00:08:32.917 "min_cntlid": 1, 00:08:32.917 "model_number": "SPDK bdev Controller", 00:08:32.917 "namespaces": [ 00:08:32.917 { 00:08:32.917 "bdev_name": "Null3", 00:08:32.917 "name": "Null3", 00:08:32.917 "nguid": "1A90099E112248498B4D196540B5740B", 00:08:32.917 "nsid": 1, 00:08:32.917 "uuid": "1a90099e-1122-4849-8b4d-196540b5740b" 00:08:32.917 } 00:08:32.917 ], 00:08:32.917 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:32.917 "serial_number": "SPDK00000000000003", 00:08:32.917 "subtype": "NVMe" 
00:08:32.917 }, 00:08:32.917 { 00:08:32.917 "allow_any_host": true, 00:08:32.917 "hosts": [], 00:08:32.917 "listen_addresses": [ 00:08:32.917 { 00:08:32.917 "adrfam": "IPv4", 00:08:32.917 "traddr": "10.0.0.2", 00:08:32.917 "transport": "TCP", 00:08:32.917 "trsvcid": "4420", 00:08:32.917 "trtype": "TCP" 00:08:32.917 } 00:08:32.917 ], 00:08:32.917 "max_cntlid": 65519, 00:08:32.918 "max_namespaces": 32, 00:08:32.918 "min_cntlid": 1, 00:08:32.918 "model_number": "SPDK bdev Controller", 00:08:32.918 "namespaces": [ 00:08:32.918 { 00:08:32.918 "bdev_name": "Null4", 00:08:32.918 "name": "Null4", 00:08:32.918 "nguid": "C6347286C12A487B8F37DE1C60956F4A", 00:08:32.918 "nsid": 1, 00:08:32.918 "uuid": "c6347286-c12a-487b-8f37-de1c60956f4a" 00:08:32.918 } 00:08:32.918 ], 00:08:32.918 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:32.918 "serial_number": "SPDK00000000000004", 00:08:32.918 "subtype": "NVMe" 00:08:32.918 } 00:08:32.918 ] 00:08:32.918 16:20:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:32.918 16:20:06 -- target/discovery.sh@42 -- # seq 1 4 00:08:32.918 16:20:06 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:32.918 16:20:06 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:32.918 16:20:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:32.918 16:20:06 -- common/autotest_common.sh@10 -- # set +x 00:08:32.918 16:20:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:32.918 16:20:06 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:32.918 16:20:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:32.918 16:20:06 -- common/autotest_common.sh@10 -- # set +x 00:08:33.176 16:20:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:33.176 16:20:06 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:33.176 16:20:06 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:33.176 16:20:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:33.176 16:20:06 -- common/autotest_common.sh@10 -- # set +x 00:08:33.176 16:20:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:33.176 16:20:06 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:33.176 16:20:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:33.176 16:20:06 -- common/autotest_common.sh@10 -- # set +x 00:08:33.176 16:20:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:33.176 16:20:06 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:33.176 16:20:06 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:33.176 16:20:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:33.176 16:20:06 -- common/autotest_common.sh@10 -- # set +x 00:08:33.176 16:20:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:33.176 16:20:06 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:33.176 16:20:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:33.176 16:20:06 -- common/autotest_common.sh@10 -- # set +x 00:08:33.176 16:20:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:33.176 16:20:06 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:33.176 16:20:06 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:33.176 16:20:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:33.176 16:20:06 -- common/autotest_common.sh@10 -- # set +x 00:08:33.176 16:20:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
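The nvmf_get_subsystems dump above is the authoritative view of what the test configured, and the teardown now in progress removes it piece by piece (nvmf_delete_subsystem, then bdev_null_delete, per subsystem). A quick way to spot-check such a dump, as a sketch (the jq filter is an assumption; field names are taken from the JSON above):
    scripts/rpc.py nvmf_get_subsystems \
        | jq -r '.[] | select(.subtype == "NVMe") | "\(.nqn) \(.serial_number) ns=\(.namespaces | length)"'
    # expected here: four lines, cnode1..cnode4, one namespace each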
00:08:33.176 16:20:07 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:33.176 16:20:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:33.176 16:20:07 -- common/autotest_common.sh@10 -- # set +x 00:08:33.176 16:20:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:33.176 16:20:07 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:33.176 16:20:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:33.176 16:20:07 -- common/autotest_common.sh@10 -- # set +x 00:08:33.176 16:20:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:33.176 16:20:07 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:33.176 16:20:07 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:33.176 16:20:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:33.176 16:20:07 -- common/autotest_common.sh@10 -- # set +x 00:08:33.176 16:20:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:33.176 16:20:07 -- target/discovery.sh@49 -- # check_bdevs= 00:08:33.176 16:20:07 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:33.176 16:20:07 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:33.176 16:20:07 -- target/discovery.sh@57 -- # nvmftestfini 00:08:33.176 16:20:07 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:33.176 16:20:07 -- nvmf/common.sh@117 -- # sync 00:08:33.177 16:20:07 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:33.177 16:20:07 -- nvmf/common.sh@120 -- # set +e 00:08:33.177 16:20:07 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:33.177 16:20:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:33.177 rmmod nvme_tcp 00:08:33.177 rmmod nvme_fabrics 00:08:33.177 rmmod nvme_keyring 00:08:33.177 16:20:07 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:33.177 16:20:07 -- nvmf/common.sh@124 -- # set -e 00:08:33.177 16:20:07 -- nvmf/common.sh@125 -- # return 0 00:08:33.177 16:20:07 -- nvmf/common.sh@478 -- # '[' -n 66017 ']' 00:08:33.177 16:20:07 -- nvmf/common.sh@479 -- # killprocess 66017 00:08:33.177 16:20:07 -- common/autotest_common.sh@936 -- # '[' -z 66017 ']' 00:08:33.177 16:20:07 -- common/autotest_common.sh@940 -- # kill -0 66017 00:08:33.177 16:20:07 -- common/autotest_common.sh@941 -- # uname 00:08:33.177 16:20:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:33.177 16:20:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66017 00:08:33.177 16:20:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:33.177 killing process with pid 66017 00:08:33.177 16:20:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:33.177 16:20:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66017' 00:08:33.177 16:20:07 -- common/autotest_common.sh@955 -- # kill 66017 00:08:33.177 [2024-04-17 16:20:07.169497] app.c: 930:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:33.177 16:20:07 -- common/autotest_common.sh@960 -- # wait 66017 00:08:33.435 16:20:07 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:33.435 16:20:07 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:33.435 16:20:07 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:33.435 16:20:07 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:33.435 16:20:07 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:33.435 16:20:07 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.435 16:20:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:33.435 16:20:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.435 16:20:07 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:33.693 ************************************ 00:08:33.693 END TEST nvmf_discovery 00:08:33.693 ************************************ 00:08:33.693 00:08:33.693 real 0m2.377s 00:08:33.693 user 0m5.990s 00:08:33.693 sys 0m0.626s 00:08:33.693 16:20:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:33.693 16:20:07 -- common/autotest_common.sh@10 -- # set +x 00:08:33.693 16:20:07 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:33.693 16:20:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:33.693 16:20:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:33.693 16:20:07 -- common/autotest_common.sh@10 -- # set +x 00:08:33.693 ************************************ 00:08:33.693 START TEST nvmf_referrals 00:08:33.693 ************************************ 00:08:33.693 16:20:07 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:33.693 * Looking for test storage... 00:08:33.693 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:33.693 16:20:07 -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:33.693 16:20:07 -- nvmf/common.sh@7 -- # uname -s 00:08:33.693 16:20:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:33.693 16:20:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:33.693 16:20:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:33.693 16:20:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:33.693 16:20:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:33.693 16:20:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:33.693 16:20:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:33.693 16:20:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:33.693 16:20:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:33.693 16:20:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:33.693 16:20:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d 00:08:33.693 16:20:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=35bbb10f-fc38-42ac-b909-033700c5e05d 00:08:33.693 16:20:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:33.693 16:20:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:33.693 16:20:07 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:33.693 16:20:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:33.693 16:20:07 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:33.693 16:20:07 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:33.693 16:20:07 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:33.693 16:20:07 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:33.693 16:20:07 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.694 16:20:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.694 16:20:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.694 16:20:07 -- paths/export.sh@5 -- # export PATH 00:08:33.694 16:20:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.694 16:20:07 -- nvmf/common.sh@47 -- # : 0 00:08:33.694 16:20:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:33.694 16:20:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:33.694 16:20:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:33.694 16:20:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:33.694 16:20:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:33.694 16:20:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:33.694 16:20:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:33.694 16:20:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:33.694 16:20:07 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:33.694 16:20:07 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:33.694 16:20:07 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:33.694 16:20:07 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:33.694 16:20:07 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:33.694 16:20:07 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:33.694 16:20:07 -- target/referrals.sh@37 -- # nvmftestinit 00:08:33.694 16:20:07 -- nvmf/common.sh@430 -- # '[' 
-z tcp ']' 00:08:33.694 16:20:07 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:33.694 16:20:07 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:33.694 16:20:07 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:33.694 16:20:07 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:33.694 16:20:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.694 16:20:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:33.694 16:20:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.694 16:20:07 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:08:33.694 16:20:07 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:08:33.694 16:20:07 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:08:33.694 16:20:07 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:08:33.694 16:20:07 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:08:33.694 16:20:07 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:08:33.694 16:20:07 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:33.694 16:20:07 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:33.694 16:20:07 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:33.694 16:20:07 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:33.694 16:20:07 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:33.694 16:20:07 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:33.694 16:20:07 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:33.694 16:20:07 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:33.694 16:20:07 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:33.694 16:20:07 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:33.694 16:20:07 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:33.694 16:20:07 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:33.694 16:20:07 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:33.694 16:20:07 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:33.952 Cannot find device "nvmf_tgt_br" 00:08:33.952 16:20:07 -- nvmf/common.sh@155 -- # true 00:08:33.952 16:20:07 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:33.952 Cannot find device "nvmf_tgt_br2" 00:08:33.952 16:20:07 -- nvmf/common.sh@156 -- # true 00:08:33.952 16:20:07 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:33.952 16:20:07 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:33.952 Cannot find device "nvmf_tgt_br" 00:08:33.952 16:20:07 -- nvmf/common.sh@158 -- # true 00:08:33.952 16:20:07 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:33.952 Cannot find device "nvmf_tgt_br2" 00:08:33.952 16:20:07 -- nvmf/common.sh@159 -- # true 00:08:33.952 16:20:07 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:33.952 16:20:07 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:33.952 16:20:07 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:33.952 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:33.952 16:20:07 -- nvmf/common.sh@162 -- # true 00:08:33.952 16:20:07 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:33.952 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:33.952 16:20:07 -- nvmf/common.sh@163 -- # true 00:08:33.952 16:20:07 -- nvmf/common.sh@166 
-- # ip netns add nvmf_tgt_ns_spdk 00:08:33.952 16:20:07 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:33.952 16:20:07 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:33.952 16:20:07 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:33.952 16:20:07 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:33.952 16:20:07 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:33.952 16:20:07 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:33.952 16:20:07 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:33.952 16:20:07 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:33.952 16:20:07 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:33.952 16:20:07 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:33.953 16:20:07 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:33.953 16:20:07 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:33.953 16:20:07 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:33.953 16:20:07 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:33.953 16:20:07 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:33.953 16:20:07 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:33.953 16:20:07 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:33.953 16:20:07 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:34.212 16:20:08 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:34.212 16:20:08 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:34.212 16:20:08 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:34.212 16:20:08 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:34.212 16:20:08 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:34.212 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:34.212 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:08:34.212 00:08:34.212 --- 10.0.0.2 ping statistics --- 00:08:34.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.212 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:08:34.212 16:20:08 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:34.212 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:34.212 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:08:34.212 00:08:34.212 --- 10.0.0.3 ping statistics --- 00:08:34.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.212 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:08:34.212 16:20:08 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:34.212 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:34.212 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:08:34.212 00:08:34.212 --- 10.0.0.1 ping statistics --- 00:08:34.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.212 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:08:34.212 16:20:08 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:34.212 16:20:08 -- nvmf/common.sh@422 -- # return 0 00:08:34.212 16:20:08 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:34.212 16:20:08 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:34.212 16:20:08 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:34.212 16:20:08 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:34.212 16:20:08 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:34.212 16:20:08 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:34.212 16:20:08 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:34.212 16:20:08 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:34.212 16:20:08 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:34.212 16:20:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:34.212 16:20:08 -- common/autotest_common.sh@10 -- # set +x 00:08:34.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.212 16:20:08 -- nvmf/common.sh@470 -- # nvmfpid=66258 00:08:34.212 16:20:08 -- nvmf/common.sh@471 -- # waitforlisten 66258 00:08:34.212 16:20:08 -- common/autotest_common.sh@817 -- # '[' -z 66258 ']' 00:08:34.212 16:20:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.212 16:20:08 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:34.212 16:20:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:34.212 16:20:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:34.212 16:20:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:34.212 16:20:08 -- common/autotest_common.sh@10 -- # set +x 00:08:34.212 [2024-04-17 16:20:08.133438] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:08:34.212 [2024-04-17 16:20:08.133552] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:34.470 [2024-04-17 16:20:08.271034] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:34.470 [2024-04-17 16:20:08.394582] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:34.470 [2024-04-17 16:20:08.394880] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:34.470 [2024-04-17 16:20:08.395070] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:34.470 [2024-04-17 16:20:08.395203] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:34.470 [2024-04-17 16:20:08.395240] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
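nvmfappstart launches the target inside the namespace that nvmftestinit just built and only returns once the RPC socket answers. Stripped of the harness, the equivalent is roughly the sketch below; the rpc_get_methods polling probe is an assumption (any cheap RPC would serve), while the binary path and flags are as traced:
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll /var/tmp/spdk.sock until the app services RPCs (the job waitforlisten performs)
    until scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done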
00:08:34.470 [2024-04-17 16:20:08.395473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:34.471 [2024-04-17 16:20:08.395530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:34.471 [2024-04-17 16:20:08.395599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.471 [2024-04-17 16:20:08.395600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:35.405 16:20:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:35.405 16:20:09 -- common/autotest_common.sh@850 -- # return 0 00:08:35.405 16:20:09 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:35.405 16:20:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:35.405 16:20:09 -- common/autotest_common.sh@10 -- # set +x 00:08:35.405 16:20:09 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:35.405 16:20:09 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:35.405 16:20:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.405 16:20:09 -- common/autotest_common.sh@10 -- # set +x 00:08:35.405 [2024-04-17 16:20:09.207193] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:35.405 16:20:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.405 16:20:09 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:35.405 16:20:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.405 16:20:09 -- common/autotest_common.sh@10 -- # set +x 00:08:35.405 [2024-04-17 16:20:09.240401] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:35.405 16:20:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.405 16:20:09 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:35.405 16:20:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.405 16:20:09 -- common/autotest_common.sh@10 -- # set +x 00:08:35.405 16:20:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.405 16:20:09 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:35.405 16:20:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.405 16:20:09 -- common/autotest_common.sh@10 -- # set +x 00:08:35.405 16:20:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.405 16:20:09 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:35.405 16:20:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.405 16:20:09 -- common/autotest_common.sh@10 -- # set +x 00:08:35.405 16:20:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.405 16:20:09 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:35.405 16:20:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.405 16:20:09 -- target/referrals.sh@48 -- # jq length 00:08:35.405 16:20:09 -- common/autotest_common.sh@10 -- # set +x 00:08:35.405 16:20:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.405 16:20:09 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:35.405 16:20:09 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:35.405 16:20:09 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:35.405 16:20:09 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:35.405 16:20:09 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:08:35.405 16:20:09 -- common/autotest_common.sh@10 -- # set +x 00:08:35.405 16:20:09 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:35.405 16:20:09 -- target/referrals.sh@21 -- # sort 00:08:35.405 16:20:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.405 16:20:09 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:35.405 16:20:09 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:35.405 16:20:09 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:35.405 16:20:09 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:35.405 16:20:09 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:35.405 16:20:09 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:35.405 16:20:09 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d --hostid=35bbb10f-fc38-42ac-b909-033700c5e05d -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:35.405 16:20:09 -- target/referrals.sh@26 -- # sort 00:08:35.663 16:20:09 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:35.663 16:20:09 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:35.663 16:20:09 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:35.664 16:20:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.664 16:20:09 -- common/autotest_common.sh@10 -- # set +x 00:08:35.664 16:20:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.664 16:20:09 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:35.664 16:20:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.664 16:20:09 -- common/autotest_common.sh@10 -- # set +x 00:08:35.664 16:20:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.664 16:20:09 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:35.664 16:20:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.664 16:20:09 -- common/autotest_common.sh@10 -- # set +x 00:08:35.664 16:20:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.664 16:20:09 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:35.664 16:20:09 -- target/referrals.sh@56 -- # jq length 00:08:35.664 16:20:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.664 16:20:09 -- common/autotest_common.sh@10 -- # set +x 00:08:35.664 16:20:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.664 16:20:09 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:35.664 16:20:09 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:35.664 16:20:09 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:35.664 16:20:09 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:35.664 16:20:09 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d --hostid=35bbb10f-fc38-42ac-b909-033700c5e05d -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:35.664 16:20:09 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:35.664 16:20:09 -- target/referrals.sh@26 -- # sort 00:08:35.664 16:20:09 -- target/referrals.sh@26 -- # echo 00:08:35.664 16:20:09 -- 
target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:35.664 16:20:09 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:35.664 16:20:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.664 16:20:09 -- common/autotest_common.sh@10 -- # set +x 00:08:35.664 16:20:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.664 16:20:09 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:35.664 16:20:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.664 16:20:09 -- common/autotest_common.sh@10 -- # set +x 00:08:35.664 16:20:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.664 16:20:09 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:35.664 16:20:09 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:35.664 16:20:09 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:35.664 16:20:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.664 16:20:09 -- common/autotest_common.sh@10 -- # set +x 00:08:35.664 16:20:09 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:35.664 16:20:09 -- target/referrals.sh@21 -- # sort 00:08:35.664 16:20:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.664 16:20:09 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:35.664 16:20:09 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:35.923 16:20:09 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:35.923 16:20:09 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:35.923 16:20:09 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:35.923 16:20:09 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d --hostid=35bbb10f-fc38-42ac-b909-033700c5e05d -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:35.923 16:20:09 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:35.923 16:20:09 -- target/referrals.sh@26 -- # sort 00:08:35.923 16:20:09 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:35.923 16:20:09 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:35.923 16:20:09 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:35.923 16:20:09 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:35.923 16:20:09 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:35.923 16:20:09 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:35.923 16:20:09 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d --hostid=35bbb10f-fc38-42ac-b909-033700c5e05d -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:35.923 16:20:09 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:35.923 16:20:09 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:35.923 16:20:09 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:35.923 16:20:09 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:35.923 16:20:09 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d 
--hostid=35bbb10f-fc38-42ac-b909-033700c5e05d -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:35.923 16:20:09 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:35.923 16:20:09 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:35.923 16:20:09 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:35.923 16:20:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.923 16:20:09 -- common/autotest_common.sh@10 -- # set +x 00:08:35.923 16:20:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.923 16:20:09 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:35.923 16:20:09 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:35.923 16:20:09 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:35.923 16:20:09 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:35.923 16:20:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.923 16:20:09 -- target/referrals.sh@21 -- # sort 00:08:35.923 16:20:09 -- common/autotest_common.sh@10 -- # set +x 00:08:35.923 16:20:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:36.181 16:20:09 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:36.181 16:20:09 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:36.181 16:20:09 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:36.181 16:20:09 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:36.181 16:20:09 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:36.181 16:20:09 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d --hostid=35bbb10f-fc38-42ac-b909-033700c5e05d -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:36.181 16:20:09 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:36.181 16:20:09 -- target/referrals.sh@26 -- # sort 00:08:36.181 16:20:10 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:36.181 16:20:10 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:36.181 16:20:10 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:36.181 16:20:10 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:36.181 16:20:10 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:36.181 16:20:10 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d --hostid=35bbb10f-fc38-42ac-b909-033700c5e05d -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:36.181 16:20:10 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:36.181 16:20:10 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:36.181 16:20:10 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:36.181 16:20:10 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:36.181 16:20:10 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:36.181 16:20:10 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d --hostid=35bbb10f-fc38-42ac-b909-033700c5e05d -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:36.181 16:20:10 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 
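The referral checks above exercise the three referral RPCs end to end; -n selects the NQN the referral entry advertises (the discovery NQN when omitted, a concrete subsystem as here), and nvme discover against the 8009 listener confirms what an initiator would actually see. Condensed, with the RPC flags exactly as traced (the hostnqn/hostid arguments are dropped for brevity):
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'      # -> 127.0.0.2
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json | jq -r '.records[].subnqn'  # initiator-side view
    scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1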
00:08:36.181 16:20:10 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:36.181 16:20:10 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:36.181 16:20:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:36.181 16:20:10 -- common/autotest_common.sh@10 -- # set +x 00:08:36.181 16:20:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:36.181 16:20:10 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:36.181 16:20:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:36.181 16:20:10 -- common/autotest_common.sh@10 -- # set +x 00:08:36.181 16:20:10 -- target/referrals.sh@82 -- # jq length 00:08:36.182 16:20:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:36.440 16:20:10 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:36.440 16:20:10 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:36.440 16:20:10 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:36.440 16:20:10 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:36.440 16:20:10 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d --hostid=35bbb10f-fc38-42ac-b909-033700c5e05d -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:36.440 16:20:10 -- target/referrals.sh@26 -- # sort 00:08:36.440 16:20:10 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:36.440 16:20:10 -- target/referrals.sh@26 -- # echo 00:08:36.440 16:20:10 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:36.440 16:20:10 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:36.440 16:20:10 -- target/referrals.sh@86 -- # nvmftestfini 00:08:36.440 16:20:10 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:36.440 16:20:10 -- nvmf/common.sh@117 -- # sync 00:08:36.440 16:20:10 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:36.440 16:20:10 -- nvmf/common.sh@120 -- # set +e 00:08:36.440 16:20:10 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:36.440 16:20:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:36.440 rmmod nvme_tcp 00:08:36.440 rmmod nvme_fabrics 00:08:36.440 rmmod nvme_keyring 00:08:36.440 16:20:10 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:36.440 16:20:10 -- nvmf/common.sh@124 -- # set -e 00:08:36.440 16:20:10 -- nvmf/common.sh@125 -- # return 0 00:08:36.440 16:20:10 -- nvmf/common.sh@478 -- # '[' -n 66258 ']' 00:08:36.440 16:20:10 -- nvmf/common.sh@479 -- # killprocess 66258 00:08:36.440 16:20:10 -- common/autotest_common.sh@936 -- # '[' -z 66258 ']' 00:08:36.440 16:20:10 -- common/autotest_common.sh@940 -- # kill -0 66258 00:08:36.440 16:20:10 -- common/autotest_common.sh@941 -- # uname 00:08:36.440 16:20:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:36.440 16:20:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66258 00:08:36.440 16:20:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:36.440 killing process with pid 66258 00:08:36.440 16:20:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:36.440 16:20:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66258' 00:08:36.440 16:20:10 -- common/autotest_common.sh@955 -- # kill 66258 00:08:36.440 16:20:10 -- common/autotest_common.sh@960 -- # wait 66258 00:08:36.698 16:20:10 -- 
nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:36.698 16:20:10 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:36.698 16:20:10 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:36.698 16:20:10 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:36.698 16:20:10 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:36.698 16:20:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.698 16:20:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:36.698 16:20:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.698 16:20:10 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:36.698 00:08:36.698 real 0m3.128s 00:08:36.698 user 0m10.013s 00:08:36.698 sys 0m0.873s 00:08:36.698 16:20:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:36.698 16:20:10 -- common/autotest_common.sh@10 -- # set +x 00:08:36.698 ************************************ 00:08:36.698 END TEST nvmf_referrals 00:08:36.698 ************************************ 00:08:36.956 16:20:10 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:36.956 16:20:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:36.956 16:20:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:36.956 16:20:10 -- common/autotest_common.sh@10 -- # set +x 00:08:36.956 ************************************ 00:08:36.956 START TEST nvmf_connect_disconnect 00:08:36.956 ************************************ 00:08:36.956 16:20:10 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:36.956 * Looking for test storage... 00:08:36.956 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:36.956 16:20:10 -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:36.956 16:20:10 -- nvmf/common.sh@7 -- # uname -s 00:08:36.956 16:20:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:36.956 16:20:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:36.956 16:20:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:36.956 16:20:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:36.956 16:20:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:36.956 16:20:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:36.956 16:20:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:36.956 16:20:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:36.956 16:20:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:36.956 16:20:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:36.956 16:20:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d 00:08:36.956 16:20:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=35bbb10f-fc38-42ac-b909-033700c5e05d 00:08:36.956 16:20:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:36.956 16:20:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:36.956 16:20:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:36.956 16:20:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:36.956 16:20:10 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:36.956 16:20:10 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:36.956 16:20:10 -- scripts/common.sh@510 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:36.956 16:20:10 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:36.956 16:20:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.956 16:20:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.957 16:20:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.957 16:20:10 -- paths/export.sh@5 -- # export PATH 00:08:36.957 16:20:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.957 16:20:10 -- nvmf/common.sh@47 -- # : 0 00:08:36.957 16:20:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:36.957 16:20:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:36.957 16:20:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:36.957 16:20:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:36.957 16:20:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:36.957 16:20:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:36.957 16:20:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:36.957 16:20:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:36.957 16:20:10 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:36.957 16:20:10 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:36.957 16:20:10 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:36.957 16:20:10 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:36.957 16:20:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:36.957 16:20:10 -- nvmf/common.sh@437 -- # 
prepare_net_devs 00:08:36.957 16:20:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:36.957 16:20:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:36.957 16:20:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.957 16:20:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:36.957 16:20:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.957 16:20:10 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:08:36.957 16:20:10 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:08:36.957 16:20:10 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:08:36.957 16:20:10 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:08:36.957 16:20:10 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:08:36.957 16:20:10 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:08:36.957 16:20:10 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:36.957 16:20:10 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:36.957 16:20:10 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:36.957 16:20:10 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:36.957 16:20:10 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:36.957 16:20:10 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:36.957 16:20:10 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:36.957 16:20:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:36.957 16:20:10 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:36.957 16:20:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:36.957 16:20:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:36.957 16:20:10 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:36.957 16:20:10 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:36.957 16:20:10 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:36.957 Cannot find device "nvmf_tgt_br" 00:08:36.957 16:20:10 -- nvmf/common.sh@155 -- # true 00:08:36.957 16:20:10 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:36.957 Cannot find device "nvmf_tgt_br2" 00:08:37.215 16:20:10 -- nvmf/common.sh@156 -- # true 00:08:37.215 16:20:10 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:37.215 16:20:11 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:37.215 Cannot find device "nvmf_tgt_br" 00:08:37.215 16:20:11 -- nvmf/common.sh@158 -- # true 00:08:37.215 16:20:11 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:37.215 Cannot find device "nvmf_tgt_br2" 00:08:37.215 16:20:11 -- nvmf/common.sh@159 -- # true 00:08:37.215 16:20:11 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:37.215 16:20:11 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:37.215 16:20:11 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:37.215 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:37.215 16:20:11 -- nvmf/common.sh@162 -- # true 00:08:37.215 16:20:11 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:37.215 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:37.215 16:20:11 -- nvmf/common.sh@163 -- # true 00:08:37.215 16:20:11 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:37.215 16:20:11 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:08:37.215 16:20:11 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:37.215 16:20:11 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:37.215 16:20:11 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:37.215 16:20:11 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:37.215 16:20:11 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:37.215 16:20:11 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:37.215 16:20:11 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:37.215 16:20:11 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:37.215 16:20:11 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:37.215 16:20:11 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:37.215 16:20:11 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:37.215 16:20:11 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:37.215 16:20:11 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:37.215 16:20:11 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:37.215 16:20:11 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:37.215 16:20:11 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:37.215 16:20:11 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:37.215 16:20:11 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:37.215 16:20:11 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:37.215 16:20:11 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:37.215 16:20:11 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:37.215 16:20:11 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:37.215 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:37.215 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:08:37.215 00:08:37.215 --- 10.0.0.2 ping statistics --- 00:08:37.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.215 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:08:37.472 16:20:11 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:37.472 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:37.472 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:08:37.472 00:08:37.472 --- 10.0.0.3 ping statistics --- 00:08:37.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.472 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:08:37.472 16:20:11 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:37.472 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:37.472 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:08:37.472 00:08:37.472 --- 10.0.0.1 ping statistics --- 00:08:37.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.472 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:08:37.472 16:20:11 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:37.472 16:20:11 -- nvmf/common.sh@422 -- # return 0 00:08:37.472 16:20:11 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:37.472 16:20:11 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:37.473 16:20:11 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:37.473 16:20:11 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:37.473 16:20:11 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:37.473 16:20:11 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:37.473 16:20:11 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:37.473 16:20:11 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:37.473 16:20:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:37.473 16:20:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:37.473 16:20:11 -- common/autotest_common.sh@10 -- # set +x 00:08:37.473 16:20:11 -- nvmf/common.sh@470 -- # nvmfpid=66567 00:08:37.473 16:20:11 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:37.473 16:20:11 -- nvmf/common.sh@471 -- # waitforlisten 66567 00:08:37.473 16:20:11 -- common/autotest_common.sh@817 -- # '[' -z 66567 ']' 00:08:37.473 16:20:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.473 16:20:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:37.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.473 16:20:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.473 16:20:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:37.473 16:20:11 -- common/autotest_common.sh@10 -- # set +x 00:08:37.473 [2024-04-17 16:20:11.356328] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:08:37.473 [2024-04-17 16:20:11.356439] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:37.473 [2024-04-17 16:20:11.499374] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:37.730 [2024-04-17 16:20:11.634412] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:37.730 [2024-04-17 16:20:11.634484] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:37.731 [2024-04-17 16:20:11.634499] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:37.731 [2024-04-17 16:20:11.634510] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:37.731 [2024-04-17 16:20:11.634529] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
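[editor's note] At this point the test fabric is fully wired and verified. nvmf_init_if (10.0.0.1) lives in the root namespace while nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3) sit inside nvmf_tgt_ns_spdk; each veth's peer is enslaved to the nvmf_br bridge, and iptables admits NVMe/TCP traffic on port 4420. The three pings above check every path through that layout. A sketch, with names and addresses exactly as traced:

  #  root netns                             netns nvmf_tgt_ns_spdk
  #  nvmf_init_if (10.0.0.1) -- nvmf_init_br --+
  #                                        [nvmf_br] -- nvmf_tgt_br  -- nvmf_tgt_if  (10.0.0.2)
  #                                             +------ nvmf_tgt_br2 -- nvmf_tgt_if2 (10.0.0.3)
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
  ping -c 1 10.0.0.2                                   # root ns -> first target interface
  ping -c 1 10.0.0.3                                   # root ns -> second target interface
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target ns back to the initiator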
00:08:37.731 [2024-04-17 16:20:11.634672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:37.731 [2024-04-17 16:20:11.635372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:37.731 [2024-04-17 16:20:11.635488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:37.731 [2024-04-17 16:20:11.635498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.665 16:20:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:38.665 16:20:12 -- common/autotest_common.sh@850 -- # return 0 00:08:38.665 16:20:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:38.665 16:20:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:38.665 16:20:12 -- common/autotest_common.sh@10 -- # set +x 00:08:38.665 16:20:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:38.665 16:20:12 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:38.665 16:20:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:38.665 16:20:12 -- common/autotest_common.sh@10 -- # set +x 00:08:38.665 [2024-04-17 16:20:12.459757] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:38.665 16:20:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:38.665 16:20:12 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:38.665 16:20:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:38.665 16:20:12 -- common/autotest_common.sh@10 -- # set +x 00:08:38.665 16:20:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:38.665 16:20:12 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:38.665 16:20:12 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:38.665 16:20:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:38.665 16:20:12 -- common/autotest_common.sh@10 -- # set +x 00:08:38.665 16:20:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:38.665 16:20:12 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:38.665 16:20:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:38.665 16:20:12 -- common/autotest_common.sh@10 -- # set +x 00:08:38.665 16:20:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:38.665 16:20:12 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:38.665 16:20:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:38.665 16:20:12 -- common/autotest_common.sh@10 -- # set +x 00:08:38.665 [2024-04-17 16:20:12.531904] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:38.665 16:20:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:38.665 16:20:12 -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:08:38.665 16:20:12 -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:08:38.665 16:20:12 -- target/connect_disconnect.sh@34 -- # set +x 00:08:41.193 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:43.093 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:45.682 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.581 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:50.111 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:50.111 16:20:23 -- 
target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:08:50.111 16:20:23 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:08:50.111 16:20:23 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:50.111 16:20:23 -- nvmf/common.sh@117 -- # sync 00:08:50.111 16:20:23 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:50.111 16:20:23 -- nvmf/common.sh@120 -- # set +e 00:08:50.111 16:20:23 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:50.111 16:20:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:50.111 rmmod nvme_tcp 00:08:50.111 rmmod nvme_fabrics 00:08:50.111 rmmod nvme_keyring 00:08:50.111 16:20:23 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:50.111 16:20:23 -- nvmf/common.sh@124 -- # set -e 00:08:50.111 16:20:23 -- nvmf/common.sh@125 -- # return 0 00:08:50.111 16:20:23 -- nvmf/common.sh@478 -- # '[' -n 66567 ']' 00:08:50.111 16:20:23 -- nvmf/common.sh@479 -- # killprocess 66567 00:08:50.111 16:20:23 -- common/autotest_common.sh@936 -- # '[' -z 66567 ']' 00:08:50.111 16:20:23 -- common/autotest_common.sh@940 -- # kill -0 66567 00:08:50.111 16:20:23 -- common/autotest_common.sh@941 -- # uname 00:08:50.111 16:20:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:50.111 16:20:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66567 00:08:50.111 killing process with pid 66567 00:08:50.111 16:20:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:50.111 16:20:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:50.111 16:20:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66567' 00:08:50.111 16:20:23 -- common/autotest_common.sh@955 -- # kill 66567 00:08:50.111 16:20:23 -- common/autotest_common.sh@960 -- # wait 66567 00:08:50.111 16:20:24 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:50.111 16:20:24 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:50.111 16:20:24 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:50.111 16:20:24 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:50.111 16:20:24 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:50.111 16:20:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.111 16:20:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:50.111 16:20:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.370 16:20:24 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:50.370 00:08:50.370 real 0m13.325s 00:08:50.370 user 0m48.799s 00:08:50.370 sys 0m1.879s 00:08:50.370 16:20:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:50.370 ************************************ 00:08:50.370 END TEST nvmf_connect_disconnect 00:08:50.370 ************************************ 00:08:50.370 16:20:24 -- common/autotest_common.sh@10 -- # set +x 00:08:50.370 16:20:24 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:50.370 16:20:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:50.370 16:20:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:50.370 16:20:24 -- common/autotest_common.sh@10 -- # set +x 00:08:50.370 ************************************ 00:08:50.370 START TEST nvmf_multitarget 00:08:50.370 ************************************ 00:08:50.370 16:20:24 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:50.370 * Looking for test storage... 
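[editor's note] For reference, the five "disconnected 1 controller(s)" lines in the nvmf_connect_disconnect run that just finished come from a loop over num_iterations=5, as set in the trace. A reconstruction (waitforserial is the autotest_common.sh helper that polls lsblk for the subsystem serial; flags match the traced disconnect command):

  for ((i = 0; i < 5; i++)); do
      nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
      waitforserial SPDKISFASTANDAWESOME       # block until the namespace appears
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  done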
00:08:50.370 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:50.370 16:20:24 -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:50.370 16:20:24 -- nvmf/common.sh@7 -- # uname -s 00:08:50.370 16:20:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:50.370 16:20:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:50.370 16:20:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:50.370 16:20:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:50.370 16:20:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:50.370 16:20:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:50.370 16:20:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:50.370 16:20:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:50.370 16:20:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:50.370 16:20:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:50.370 16:20:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d 00:08:50.370 16:20:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=35bbb10f-fc38-42ac-b909-033700c5e05d 00:08:50.370 16:20:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:50.370 16:20:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:50.370 16:20:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:50.370 16:20:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:50.370 16:20:24 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:50.370 16:20:24 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:50.370 16:20:24 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:50.370 16:20:24 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:50.371 16:20:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.371 16:20:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.371 16:20:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.371 16:20:24 -- paths/export.sh@5 -- # export PATH 00:08:50.371 16:20:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.371 16:20:24 -- nvmf/common.sh@47 -- # : 0 00:08:50.371 16:20:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:50.371 16:20:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:50.371 16:20:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:50.371 16:20:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:50.371 16:20:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:50.371 16:20:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:50.371 16:20:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:50.371 16:20:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:50.371 16:20:24 -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:08:50.371 16:20:24 -- target/multitarget.sh@15 -- # nvmftestinit 00:08:50.371 16:20:24 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:50.371 16:20:24 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:50.371 16:20:24 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:50.371 16:20:24 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:50.371 16:20:24 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:50.371 16:20:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.371 16:20:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:50.371 16:20:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.371 16:20:24 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:08:50.371 16:20:24 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:08:50.371 16:20:24 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:08:50.371 16:20:24 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:08:50.371 16:20:24 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:08:50.371 16:20:24 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:08:50.371 16:20:24 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:50.371 16:20:24 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:50.371 16:20:24 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:50.371 16:20:24 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:50.371 16:20:24 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:50.371 16:20:24 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:50.371 16:20:24 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:50.371 16:20:24 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:50.371 16:20:24 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:50.371 16:20:24 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:50.371 16:20:24 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:50.371 16:20:24 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:50.371 16:20:24 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:50.371 16:20:24 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:50.630 Cannot find device "nvmf_tgt_br" 00:08:50.630 16:20:24 -- nvmf/common.sh@155 -- # true 00:08:50.630 16:20:24 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:50.630 Cannot find device "nvmf_tgt_br2" 00:08:50.630 16:20:24 -- nvmf/common.sh@156 -- # true 00:08:50.630 16:20:24 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:50.630 16:20:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:50.630 Cannot find device "nvmf_tgt_br" 00:08:50.630 16:20:24 -- nvmf/common.sh@158 -- # true 00:08:50.630 16:20:24 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:50.630 Cannot find device "nvmf_tgt_br2" 00:08:50.630 16:20:24 -- nvmf/common.sh@159 -- # true 00:08:50.630 16:20:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:50.630 16:20:24 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:50.630 16:20:24 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:50.630 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:50.630 16:20:24 -- nvmf/common.sh@162 -- # true 00:08:50.630 16:20:24 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:50.630 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:50.630 16:20:24 -- nvmf/common.sh@163 -- # true 00:08:50.630 16:20:24 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:50.630 16:20:24 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:50.630 16:20:24 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:50.630 16:20:24 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:50.630 16:20:24 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:50.630 16:20:24 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:50.630 16:20:24 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:50.630 16:20:24 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:50.630 16:20:24 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:50.630 16:20:24 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:50.630 16:20:24 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:50.630 16:20:24 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:50.630 16:20:24 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:50.630 16:20:24 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:50.630 16:20:24 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:50.630 16:20:24 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:08:50.630 16:20:24 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:50.630 16:20:24 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:50.630 16:20:24 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:50.630 16:20:24 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:50.888 16:20:24 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:50.888 16:20:24 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:50.888 16:20:24 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:50.888 16:20:24 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:50.888 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:50.888 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:08:50.888 00:08:50.888 --- 10.0.0.2 ping statistics --- 00:08:50.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.888 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:08:50.888 16:20:24 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:50.888 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:50.888 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:08:50.888 00:08:50.888 --- 10.0.0.3 ping statistics --- 00:08:50.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.888 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:08:50.888 16:20:24 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:50.888 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:50.888 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:08:50.888 00:08:50.888 --- 10.0.0.1 ping statistics --- 00:08:50.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.889 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:08:50.889 16:20:24 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:50.889 16:20:24 -- nvmf/common.sh@422 -- # return 0 00:08:50.889 16:20:24 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:50.889 16:20:24 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:50.889 16:20:24 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:50.889 16:20:24 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:50.889 16:20:24 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:50.889 16:20:24 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:50.889 16:20:24 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:50.889 16:20:24 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:08:50.889 16:20:24 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:50.889 16:20:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:50.889 16:20:24 -- common/autotest_common.sh@10 -- # set +x 00:08:50.889 16:20:24 -- nvmf/common.sh@470 -- # nvmfpid=66976 00:08:50.889 16:20:24 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:50.889 16:20:24 -- nvmf/common.sh@471 -- # waitforlisten 66976 00:08:50.889 16:20:24 -- common/autotest_common.sh@817 -- # '[' -z 66976 ']' 00:08:50.889 16:20:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
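[editor's note] waitforlisten gates every test on the target actually serving RPCs: it polls the UNIX-domain RPC socket until the freshly spawned nvmf_tgt answers, bailing out early if the process dies. A simplified sketch of the idiom (an assumption condensed from autotest_common.sh, using rpc.py's rpc_get_methods as the liveness probe, not the literal helper):

  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
      local i
      for ((i = 0; i < 100; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1                         # target died
          rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0  # RPC is up
          sleep 0.1
      done
      return 1                                                           # timed out
  }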
00:08:50.889 16:20:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:50.889 16:20:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:50.889 16:20:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:50.889 16:20:24 -- common/autotest_common.sh@10 -- # set +x 00:08:50.889 [2024-04-17 16:20:24.810258] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:08:50.889 [2024-04-17 16:20:24.810362] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:51.147 [2024-04-17 16:20:24.949949] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:51.147 [2024-04-17 16:20:25.088966] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:51.147 [2024-04-17 16:20:25.089047] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:51.147 [2024-04-17 16:20:25.089067] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:51.147 [2024-04-17 16:20:25.089082] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:51.147 [2024-04-17 16:20:25.089091] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:51.147 [2024-04-17 16:20:25.089246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.147 [2024-04-17 16:20:25.089748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:51.147 [2024-04-17 16:20:25.089998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:51.147 [2024-04-17 16:20:25.090047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.713 16:20:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:51.713 16:20:25 -- common/autotest_common.sh@850 -- # return 0 00:08:51.713 16:20:25 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:51.713 16:20:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:51.713 16:20:25 -- common/autotest_common.sh@10 -- # set +x 00:08:51.971 16:20:25 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:51.971 16:20:25 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:51.971 16:20:25 -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:51.971 16:20:25 -- target/multitarget.sh@21 -- # jq length 00:08:51.971 16:20:25 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:08:51.971 16:20:25 -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:08:52.229 "nvmf_tgt_1" 00:08:52.229 16:20:26 -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:08:52.229 "nvmf_tgt_2" 00:08:52.229 16:20:26 -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:52.229 16:20:26 -- target/multitarget.sh@28 -- # jq length 00:08:52.487 16:20:26 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:08:52.487 16:20:26 -- 
target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:08:52.487 true 00:08:52.487 16:20:26 -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:08:52.745 true 00:08:52.745 16:20:26 -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:52.745 16:20:26 -- target/multitarget.sh@35 -- # jq length 00:08:53.003 16:20:26 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:08:53.003 16:20:26 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:53.003 16:20:26 -- target/multitarget.sh@41 -- # nvmftestfini 00:08:53.003 16:20:26 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:53.003 16:20:26 -- nvmf/common.sh@117 -- # sync 00:08:53.003 16:20:26 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:53.003 16:20:26 -- nvmf/common.sh@120 -- # set +e 00:08:53.004 16:20:26 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:53.004 16:20:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:53.004 rmmod nvme_tcp 00:08:53.004 rmmod nvme_fabrics 00:08:53.004 rmmod nvme_keyring 00:08:53.004 16:20:26 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:53.004 16:20:26 -- nvmf/common.sh@124 -- # set -e 00:08:53.004 16:20:26 -- nvmf/common.sh@125 -- # return 0 00:08:53.004 16:20:26 -- nvmf/common.sh@478 -- # '[' -n 66976 ']' 00:08:53.004 16:20:26 -- nvmf/common.sh@479 -- # killprocess 66976 00:08:53.004 16:20:26 -- common/autotest_common.sh@936 -- # '[' -z 66976 ']' 00:08:53.004 16:20:26 -- common/autotest_common.sh@940 -- # kill -0 66976 00:08:53.004 16:20:26 -- common/autotest_common.sh@941 -- # uname 00:08:53.004 16:20:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:53.004 16:20:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66976 00:08:53.004 16:20:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:53.004 16:20:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:53.004 killing process with pid 66976 00:08:53.004 16:20:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66976' 00:08:53.004 16:20:26 -- common/autotest_common.sh@955 -- # kill 66976 00:08:53.004 16:20:26 -- common/autotest_common.sh@960 -- # wait 66976 00:08:53.262 16:20:27 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:53.262 16:20:27 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:53.262 16:20:27 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:53.262 16:20:27 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:53.262 16:20:27 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:53.262 16:20:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.262 16:20:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:53.262 16:20:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.262 16:20:27 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:53.262 00:08:53.262 real 0m3.003s 00:08:53.262 user 0m9.741s 00:08:53.262 sys 0m0.719s 00:08:53.262 16:20:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:53.262 16:20:27 -- common/autotest_common.sh@10 -- # set +x 00:08:53.262 ************************************ 00:08:53.262 END TEST nvmf_multitarget 00:08:53.262 ************************************ 00:08:53.520 16:20:27 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:53.520 16:20:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:53.520 16:20:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:53.520 16:20:27 -- common/autotest_common.sh@10 -- # set +x 00:08:53.520 ************************************ 00:08:53.520 START TEST nvmf_rpc 00:08:53.520 ************************************ 00:08:53.520 16:20:27 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:53.520 * Looking for test storage... 00:08:53.520 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:53.520 16:20:27 -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:53.520 16:20:27 -- nvmf/common.sh@7 -- # uname -s 00:08:53.520 16:20:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:53.520 16:20:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:53.520 16:20:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:53.520 16:20:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:53.520 16:20:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:53.520 16:20:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:53.520 16:20:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:53.520 16:20:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:53.520 16:20:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:53.520 16:20:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:53.521 16:20:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d 00:08:53.521 16:20:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=35bbb10f-fc38-42ac-b909-033700c5e05d 00:08:53.521 16:20:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:53.521 16:20:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:53.521 16:20:27 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:53.521 16:20:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:53.521 16:20:27 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:53.521 16:20:27 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:53.521 16:20:27 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:53.521 16:20:27 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:53.521 16:20:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.521 16:20:27 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.521 16:20:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.521 16:20:27 -- paths/export.sh@5 -- # export PATH 00:08:53.521 16:20:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.521 16:20:27 -- nvmf/common.sh@47 -- # : 0 00:08:53.521 16:20:27 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:53.521 16:20:27 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:53.521 16:20:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:53.521 16:20:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:53.521 16:20:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:53.521 16:20:27 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:53.521 16:20:27 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:53.521 16:20:27 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:53.521 16:20:27 -- target/rpc.sh@11 -- # loops=5 00:08:53.521 16:20:27 -- target/rpc.sh@23 -- # nvmftestinit 00:08:53.521 16:20:27 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:53.521 16:20:27 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:53.521 16:20:27 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:53.521 16:20:27 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:53.521 16:20:27 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:53.521 16:20:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.521 16:20:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:53.521 16:20:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.521 16:20:27 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:08:53.521 16:20:27 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:08:53.521 16:20:27 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:08:53.521 16:20:27 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:08:53.521 16:20:27 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:08:53.521 16:20:27 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:08:53.521 16:20:27 -- nvmf/common.sh@141 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:08:53.521 16:20:27 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:53.521 16:20:27 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:53.521 16:20:27 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:53.521 16:20:27 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:53.521 16:20:27 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:53.521 16:20:27 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:53.521 16:20:27 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:53.521 16:20:27 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:53.521 16:20:27 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:53.521 16:20:27 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:53.521 16:20:27 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:53.521 16:20:27 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:53.521 16:20:27 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:53.521 Cannot find device "nvmf_tgt_br" 00:08:53.521 16:20:27 -- nvmf/common.sh@155 -- # true 00:08:53.521 16:20:27 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:53.779 Cannot find device "nvmf_tgt_br2" 00:08:53.779 16:20:27 -- nvmf/common.sh@156 -- # true 00:08:53.779 16:20:27 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:53.779 16:20:27 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:53.779 Cannot find device "nvmf_tgt_br" 00:08:53.779 16:20:27 -- nvmf/common.sh@158 -- # true 00:08:53.779 16:20:27 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:53.779 Cannot find device "nvmf_tgt_br2" 00:08:53.779 16:20:27 -- nvmf/common.sh@159 -- # true 00:08:53.779 16:20:27 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:53.779 16:20:27 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:53.779 16:20:27 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:53.779 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:53.779 16:20:27 -- nvmf/common.sh@162 -- # true 00:08:53.779 16:20:27 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:53.779 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:53.779 16:20:27 -- nvmf/common.sh@163 -- # true 00:08:53.779 16:20:27 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:53.779 16:20:27 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:53.779 16:20:27 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:53.780 16:20:27 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:53.780 16:20:27 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:53.780 16:20:27 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:53.780 16:20:27 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:53.780 16:20:27 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:53.780 16:20:27 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:53.780 16:20:27 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:53.780 16:20:27 -- nvmf/common.sh@184 -- # ip 
link set nvmf_init_br up 00:08:53.780 16:20:27 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:53.780 16:20:27 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:53.780 16:20:27 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:53.780 16:20:27 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:53.780 16:20:27 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:53.780 16:20:27 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:53.780 16:20:27 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:53.780 16:20:27 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:53.780 16:20:27 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:53.780 16:20:27 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:54.038 16:20:27 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:54.038 16:20:27 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:54.038 16:20:27 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:54.038 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:54.038 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:08:54.038 00:08:54.038 --- 10.0.0.2 ping statistics --- 00:08:54.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.038 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:08:54.038 16:20:27 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:54.038 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:54.038 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:08:54.038 00:08:54.038 --- 10.0.0.3 ping statistics --- 00:08:54.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.038 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:08:54.038 16:20:27 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:54.038 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:54.038 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:08:54.038 00:08:54.038 --- 10.0.0.1 ping statistics --- 00:08:54.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.038 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:08:54.038 16:20:27 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:54.038 16:20:27 -- nvmf/common.sh@422 -- # return 0 00:08:54.038 16:20:27 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:54.038 16:20:27 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:54.038 16:20:27 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:54.038 16:20:27 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:54.038 16:20:27 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:54.038 16:20:27 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:54.038 16:20:27 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:54.038 16:20:27 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:08:54.038 16:20:27 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:54.038 16:20:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:54.038 16:20:27 -- common/autotest_common.sh@10 -- # set +x 00:08:54.038 16:20:27 -- nvmf/common.sh@470 -- # nvmfpid=67210 00:08:54.039 16:20:27 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:54.039 16:20:27 -- nvmf/common.sh@471 -- # waitforlisten 67210 00:08:54.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.039 16:20:27 -- common/autotest_common.sh@817 -- # '[' -z 67210 ']' 00:08:54.039 16:20:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.039 16:20:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:54.039 16:20:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.039 16:20:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:54.039 16:20:27 -- common/autotest_common.sh@10 -- # set +x 00:08:54.039 [2024-04-17 16:20:27.956502] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:08:54.039 [2024-04-17 16:20:27.957349] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:54.297 [2024-04-17 16:20:28.102301] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:54.297 [2024-04-17 16:20:28.268862] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:54.297 [2024-04-17 16:20:28.269319] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:54.297 [2024-04-17 16:20:28.269511] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:54.297 [2024-04-17 16:20:28.269675] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:54.297 [2024-04-17 16:20:28.269919] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
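[editor's note] Once the reactors below come up, the rpc.sh pass queries nvmf_get_stats before and after nvmf_create_transport and uses jq to assert that the four poll groups (one per core in the 0xF mask) start with empty transport lists and each gain a TCP entry afterwards. Boiled down (assumes rpc.py talking to the in-namespace target, as in the trace):

  rpc.py nvmf_get_stats | jq '.poll_groups | length'                 # expect 4 (-m 0xF)
  rpc.py nvmf_get_stats | jq '.poll_groups[0].transports[0]'         # null before creation
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_get_stats | jq '.poll_groups[0].transports[0].trtype'  # now "TCP"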
00:08:54.297 [2024-04-17 16:20:28.270123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.297 [2024-04-17 16:20:28.270229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:54.297 [2024-04-17 16:20:28.270388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:54.297 [2024-04-17 16:20:28.270408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.232 16:20:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:55.232 16:20:29 -- common/autotest_common.sh@850 -- # return 0 00:08:55.232 16:20:29 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:55.232 16:20:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:55.232 16:20:29 -- common/autotest_common.sh@10 -- # set +x 00:08:55.232 16:20:29 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:55.232 16:20:29 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:08:55.232 16:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:55.232 16:20:29 -- common/autotest_common.sh@10 -- # set +x 00:08:55.232 16:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:55.232 16:20:29 -- target/rpc.sh@26 -- # stats='{ 00:08:55.232 "poll_groups": [ 00:08:55.232 { 00:08:55.232 "admin_qpairs": 0, 00:08:55.232 "completed_nvme_io": 0, 00:08:55.232 "current_admin_qpairs": 0, 00:08:55.232 "current_io_qpairs": 0, 00:08:55.232 "io_qpairs": 0, 00:08:55.232 "name": "nvmf_tgt_poll_group_0", 00:08:55.232 "pending_bdev_io": 0, 00:08:55.232 "transports": [] 00:08:55.232 }, 00:08:55.232 { 00:08:55.232 "admin_qpairs": 0, 00:08:55.232 "completed_nvme_io": 0, 00:08:55.232 "current_admin_qpairs": 0, 00:08:55.232 "current_io_qpairs": 0, 00:08:55.232 "io_qpairs": 0, 00:08:55.232 "name": "nvmf_tgt_poll_group_1", 00:08:55.232 "pending_bdev_io": 0, 00:08:55.232 "transports": [] 00:08:55.232 }, 00:08:55.232 { 00:08:55.232 "admin_qpairs": 0, 00:08:55.232 "completed_nvme_io": 0, 00:08:55.232 "current_admin_qpairs": 0, 00:08:55.232 "current_io_qpairs": 0, 00:08:55.232 "io_qpairs": 0, 00:08:55.232 "name": "nvmf_tgt_poll_group_2", 00:08:55.232 "pending_bdev_io": 0, 00:08:55.232 "transports": [] 00:08:55.232 }, 00:08:55.232 { 00:08:55.232 "admin_qpairs": 0, 00:08:55.232 "completed_nvme_io": 0, 00:08:55.232 "current_admin_qpairs": 0, 00:08:55.232 "current_io_qpairs": 0, 00:08:55.232 "io_qpairs": 0, 00:08:55.232 "name": "nvmf_tgt_poll_group_3", 00:08:55.232 "pending_bdev_io": 0, 00:08:55.232 "transports": [] 00:08:55.232 } 00:08:55.232 ], 00:08:55.232 "tick_rate": 2200000000 00:08:55.232 }' 00:08:55.232 16:20:29 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:08:55.232 16:20:29 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:08:55.232 16:20:29 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:08:55.232 16:20:29 -- target/rpc.sh@15 -- # wc -l 00:08:55.232 16:20:29 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:08:55.232 16:20:29 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:08:55.232 16:20:29 -- target/rpc.sh@29 -- # [[ null == null ]] 00:08:55.232 16:20:29 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:55.232 16:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:55.232 16:20:29 -- common/autotest_common.sh@10 -- # set +x 00:08:55.232 [2024-04-17 16:20:29.183600] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:55.232 16:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:55.232 16:20:29 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:08:55.232 16:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:55.232 16:20:29 -- common/autotest_common.sh@10 -- # set +x 00:08:55.232 16:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:55.232 16:20:29 -- target/rpc.sh@33 -- # stats='{ 00:08:55.232 "poll_groups": [ 00:08:55.232 { 00:08:55.232 "admin_qpairs": 0, 00:08:55.232 "completed_nvme_io": 0, 00:08:55.232 "current_admin_qpairs": 0, 00:08:55.232 "current_io_qpairs": 0, 00:08:55.232 "io_qpairs": 0, 00:08:55.232 "name": "nvmf_tgt_poll_group_0", 00:08:55.232 "pending_bdev_io": 0, 00:08:55.232 "transports": [ 00:08:55.232 { 00:08:55.232 "trtype": "TCP" 00:08:55.232 } 00:08:55.232 ] 00:08:55.232 }, 00:08:55.232 { 00:08:55.232 "admin_qpairs": 0, 00:08:55.232 "completed_nvme_io": 0, 00:08:55.232 "current_admin_qpairs": 0, 00:08:55.232 "current_io_qpairs": 0, 00:08:55.232 "io_qpairs": 0, 00:08:55.232 "name": "nvmf_tgt_poll_group_1", 00:08:55.232 "pending_bdev_io": 0, 00:08:55.232 "transports": [ 00:08:55.232 { 00:08:55.232 "trtype": "TCP" 00:08:55.232 } 00:08:55.232 ] 00:08:55.232 }, 00:08:55.232 { 00:08:55.232 "admin_qpairs": 0, 00:08:55.232 "completed_nvme_io": 0, 00:08:55.232 "current_admin_qpairs": 0, 00:08:55.232 "current_io_qpairs": 0, 00:08:55.232 "io_qpairs": 0, 00:08:55.232 "name": "nvmf_tgt_poll_group_2", 00:08:55.232 "pending_bdev_io": 0, 00:08:55.232 "transports": [ 00:08:55.232 { 00:08:55.232 "trtype": "TCP" 00:08:55.232 } 00:08:55.232 ] 00:08:55.232 }, 00:08:55.232 { 00:08:55.232 "admin_qpairs": 0, 00:08:55.232 "completed_nvme_io": 0, 00:08:55.232 "current_admin_qpairs": 0, 00:08:55.232 "current_io_qpairs": 0, 00:08:55.232 "io_qpairs": 0, 00:08:55.232 "name": "nvmf_tgt_poll_group_3", 00:08:55.232 "pending_bdev_io": 0, 00:08:55.232 "transports": [ 00:08:55.232 { 00:08:55.232 "trtype": "TCP" 00:08:55.232 } 00:08:55.232 ] 00:08:55.232 } 00:08:55.232 ], 00:08:55.232 "tick_rate": 2200000000 00:08:55.232 }' 00:08:55.232 16:20:29 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:08:55.232 16:20:29 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:55.232 16:20:29 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:55.232 16:20:29 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:55.490 16:20:29 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:08:55.490 16:20:29 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:08:55.490 16:20:29 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:08:55.490 16:20:29 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:55.490 16:20:29 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:55.490 16:20:29 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:08:55.490 16:20:29 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:08:55.490 16:20:29 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:08:55.490 16:20:29 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:08:55.490 16:20:29 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:08:55.491 16:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:55.491 16:20:29 -- common/autotest_common.sh@10 -- # set +x 00:08:55.491 Malloc1 00:08:55.491 16:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:55.491 16:20:29 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:55.491 16:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:55.491 16:20:29 -- common/autotest_common.sh@10 -- # set +x 00:08:55.491 
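[editor's note] The block that follows provisions the subsystem and then exercises the host allow-list both ways: with allow_any_host disabled, the first nvme connect must be rejected ("does not allow host"), and once nvmf_subsystem_add_host whitelists the initiator's NQN the same connect succeeds. Condensed from the trace below, with the NQN and host UUID copied from the log:

  rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1   # deny unknown hosts
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d \
      && echo "unexpected: connect should have failed"                 # expected to FAIL
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d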
16:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:55.491 16:20:29 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:55.491 16:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:55.491 16:20:29 -- common/autotest_common.sh@10 -- # set +x 00:08:55.491 16:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:55.491 16:20:29 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:08:55.491 16:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:55.491 16:20:29 -- common/autotest_common.sh@10 -- # set +x 00:08:55.491 16:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:55.491 16:20:29 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:55.491 16:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:55.491 16:20:29 -- common/autotest_common.sh@10 -- # set +x 00:08:55.491 [2024-04-17 16:20:29.416546] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:55.491 16:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:55.491 16:20:29 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d --hostid=35bbb10f-fc38-42ac-b909-033700c5e05d -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d -a 10.0.0.2 -s 4420 00:08:55.491 16:20:29 -- common/autotest_common.sh@638 -- # local es=0 00:08:55.491 16:20:29 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d --hostid=35bbb10f-fc38-42ac-b909-033700c5e05d -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d -a 10.0.0.2 -s 4420 00:08:55.491 16:20:29 -- common/autotest_common.sh@626 -- # local arg=nvme 00:08:55.491 16:20:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:55.491 16:20:29 -- common/autotest_common.sh@630 -- # type -t nvme 00:08:55.491 16:20:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:55.491 16:20:29 -- common/autotest_common.sh@632 -- # type -P nvme 00:08:55.491 16:20:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:55.491 16:20:29 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:08:55.491 16:20:29 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:08:55.491 16:20:29 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d --hostid=35bbb10f-fc38-42ac-b909-033700c5e05d -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d -a 10.0.0.2 -s 4420 00:08:55.491 [2024-04-17 16:20:29.434791] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d' 00:08:55.491 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:55.491 could not add new controller: failed to write to nvme-fabrics device 00:08:55.491 16:20:29 -- common/autotest_common.sh@641 -- # es=1 00:08:55.491 16:20:29 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:55.491 16:20:29 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:08:55.491 16:20:29 -- common/autotest_common.sh@665 -- # 
(( !es == 0 )) 00:08:55.491 16:20:29 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d 00:08:55.491 16:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:55.491 16:20:29 -- common/autotest_common.sh@10 -- # set +x 00:08:55.491 16:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:55.491 16:20:29 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d --hostid=35bbb10f-fc38-42ac-b909-033700c5e05d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:55.749 16:20:29 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:08:55.749 16:20:29 -- common/autotest_common.sh@1184 -- # local i=0 00:08:55.749 16:20:29 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:55.749 16:20:29 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:08:55.749 16:20:29 -- common/autotest_common.sh@1191 -- # sleep 2 00:08:57.647 16:20:31 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:57.647 16:20:31 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:57.647 16:20:31 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:57.647 16:20:31 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:08:57.647 16:20:31 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:08:57.647 16:20:31 -- common/autotest_common.sh@1194 -- # return 0 00:08:57.647 16:20:31 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:57.647 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:57.647 16:20:31 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:57.647 16:20:31 -- common/autotest_common.sh@1205 -- # local i=0 00:08:57.647 16:20:31 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:08:57.647 16:20:31 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:57.647 16:20:31 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:08:57.647 16:20:31 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:57.904 16:20:31 -- common/autotest_common.sh@1217 -- # return 0 00:08:57.904 16:20:31 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d 00:08:57.904 16:20:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:57.904 16:20:31 -- common/autotest_common.sh@10 -- # set +x 00:08:57.904 16:20:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:57.904 16:20:31 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d --hostid=35bbb10f-fc38-42ac-b909-033700c5e05d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:57.904 16:20:31 -- common/autotest_common.sh@638 -- # local es=0 00:08:57.905 16:20:31 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d --hostid=35bbb10f-fc38-42ac-b909-033700c5e05d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:57.905 16:20:31 -- common/autotest_common.sh@626 -- # local arg=nvme 00:08:57.905 16:20:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:57.905 16:20:31 -- common/autotest_common.sh@630 -- # type -t nvme 00:08:57.905 16:20:31 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:57.905 16:20:31 -- common/autotest_common.sh@632 -- # type -P nvme 00:08:57.905 16:20:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:57.905 16:20:31 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:08:57.905 16:20:31 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:08:57.905 16:20:31 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d --hostid=35bbb10f-fc38-42ac-b909-033700c5e05d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:57.905 [2024-04-17 16:20:31.728026] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d' 00:08:57.905 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:57.905 could not add new controller: failed to write to nvme-fabrics device 00:08:57.905 16:20:31 -- common/autotest_common.sh@641 -- # es=1 00:08:57.905 16:20:31 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:57.905 16:20:31 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:08:57.905 16:20:31 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:57.905 16:20:31 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:08:57.905 16:20:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:57.905 16:20:31 -- common/autotest_common.sh@10 -- # set +x 00:08:57.905 16:20:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:57.905 16:20:31 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d --hostid=35bbb10f-fc38-42ac-b909-033700c5e05d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:57.905 16:20:31 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:08:57.905 16:20:31 -- common/autotest_common.sh@1184 -- # local i=0 00:08:57.905 16:20:31 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:57.905 16:20:31 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:08:57.905 16:20:31 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:00.430 16:20:33 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:00.430 16:20:33 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:00.430 16:20:33 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:00.430 16:20:33 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:00.430 16:20:33 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:00.430 16:20:33 -- common/autotest_common.sh@1194 -- # return 0 00:09:00.430 16:20:33 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:00.430 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:00.430 16:20:33 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:00.430 16:20:33 -- common/autotest_common.sh@1205 -- # local i=0 00:09:00.430 16:20:33 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:00.430 16:20:33 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:00.430 16:20:33 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:00.430 16:20:33 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:00.430 16:20:33 -- common/autotest_common.sh@1217 -- # return 0 00:09:00.430 16:20:33 -- 
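The two rejected connects above exercise the per-subsystem allow list: while the host NQN is neither whitelisted via nvmf_subsystem_add_host nor covered by allow_any_host, the target logs nvmf_qpair_access_allowed and nvme-cli reports an I/O error on /dev/nvme-fabrics. A condensed sketch of that negative/positive cycle, with the generated host NQN abbreviated as $HOST_NQN and error handling simplified:

    # Expect failure: allow_any_host disabled and host not on the allow list.
    scripts/rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
    if nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 \
            --hostnqn="$HOST_NQN" -a 10.0.0.2 -s 4420; then
        echo "connect unexpectedly succeeded" >&2; exit 1
    fi

    # Whitelist the host (or re-enable allow_any_host with -e); the same
    # connect must now succeed.
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$HOST_NQN"
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$HOST_NQN" -a 10.0.0.2 -s 4420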
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:00.430 16:20:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:00.430 16:20:33 -- common/autotest_common.sh@10 -- # set +x 00:09:00.430 16:20:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:00.430 16:20:34 -- target/rpc.sh@81 -- # seq 1 5 00:09:00.430 16:20:34 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:00.430 16:20:34 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:00.430 16:20:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:00.430 16:20:34 -- common/autotest_common.sh@10 -- # set +x 00:09:00.430 16:20:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:00.430 16:20:34 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:00.430 16:20:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:00.430 16:20:34 -- common/autotest_common.sh@10 -- # set +x 00:09:00.430 [2024-04-17 16:20:34.017141] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:00.430 16:20:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:00.430 16:20:34 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:00.430 16:20:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:00.430 16:20:34 -- common/autotest_common.sh@10 -- # set +x 00:09:00.430 16:20:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:00.430 16:20:34 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:00.430 16:20:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:00.430 16:20:34 -- common/autotest_common.sh@10 -- # set +x 00:09:00.430 16:20:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:00.430 16:20:34 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d --hostid=35bbb10f-fc38-42ac-b909-033700c5e05d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:00.430 16:20:34 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:00.430 16:20:34 -- common/autotest_common.sh@1184 -- # local i=0 00:09:00.430 16:20:34 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:00.430 16:20:34 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:00.430 16:20:34 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:02.327 16:20:36 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:02.327 16:20:36 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:02.327 16:20:36 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:02.327 16:20:36 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:02.327 16:20:36 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:02.327 16:20:36 -- common/autotest_common.sh@1194 -- # return 0 00:09:02.327 16:20:36 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:02.327 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:02.327 16:20:36 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:02.327 16:20:36 -- common/autotest_common.sh@1205 -- # local i=0 00:09:02.327 16:20:36 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:02.327 16:20:36 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 
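Each pass of the seq 1 5 loop stands up a complete target from scratch: create the subsystem with a fixed serial, add a TCP listener, attach the Malloc1 bdev as namespace 5, open it to any host, and only then connect. One iteration reduces to roughly the following ($HOST_NQN and $HOST_ID stand in for the generated values in the trace):

    for i in $(seq 1 5); do
        scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
        scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 \
            --hostnqn="$HOST_NQN" --hostid="$HOST_ID" -a 10.0.0.2 -s 4420
        # ... wait for the namespace, then the teardown sketched further below ...
    done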
00:09:02.327 16:20:36 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:02.327 16:20:36 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:02.327 16:20:36 -- common/autotest_common.sh@1217 -- # return 0 00:09:02.327 16:20:36 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:02.327 16:20:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.327 16:20:36 -- common/autotest_common.sh@10 -- # set +x 00:09:02.327 16:20:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.327 16:20:36 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:02.327 16:20:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.327 16:20:36 -- common/autotest_common.sh@10 -- # set +x 00:09:02.327 16:20:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.327 16:20:36 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:02.327 16:20:36 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:02.327 16:20:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.327 16:20:36 -- common/autotest_common.sh@10 -- # set +x 00:09:02.327 16:20:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.327 16:20:36 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:02.327 16:20:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.327 16:20:36 -- common/autotest_common.sh@10 -- # set +x 00:09:02.327 [2024-04-17 16:20:36.308157] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:02.327 16:20:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.327 16:20:36 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:02.327 16:20:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.327 16:20:36 -- common/autotest_common.sh@10 -- # set +x 00:09:02.327 16:20:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.327 16:20:36 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:02.327 16:20:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.327 16:20:36 -- common/autotest_common.sh@10 -- # set +x 00:09:02.327 16:20:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.327 16:20:36 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d --hostid=35bbb10f-fc38-42ac-b909-033700c5e05d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:02.585 16:20:36 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:02.585 16:20:36 -- common/autotest_common.sh@1184 -- # local i=0 00:09:02.585 16:20:36 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:02.585 16:20:36 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:02.585 16:20:36 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:04.485 16:20:38 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:04.485 16:20:38 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:04.485 16:20:38 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:04.485 16:20:38 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:04.485 16:20:38 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:04.485 16:20:38 -- 
common/autotest_common.sh@1194 -- # return 0 00:09:04.485 16:20:38 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:04.743 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:04.743 16:20:38 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:04.743 16:20:38 -- common/autotest_common.sh@1205 -- # local i=0 00:09:04.743 16:20:38 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:04.743 16:20:38 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:04.743 16:20:38 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:04.743 16:20:38 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:04.743 16:20:38 -- common/autotest_common.sh@1217 -- # return 0 00:09:04.743 16:20:38 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:04.743 16:20:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:04.743 16:20:38 -- common/autotest_common.sh@10 -- # set +x 00:09:04.743 16:20:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:04.743 16:20:38 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:04.743 16:20:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:04.743 16:20:38 -- common/autotest_common.sh@10 -- # set +x 00:09:04.743 16:20:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:04.743 16:20:38 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:04.743 16:20:38 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:04.743 16:20:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:04.743 16:20:38 -- common/autotest_common.sh@10 -- # set +x 00:09:04.743 16:20:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:04.743 16:20:38 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:04.743 16:20:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:04.743 16:20:38 -- common/autotest_common.sh@10 -- # set +x 00:09:04.743 [2024-04-17 16:20:38.595967] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:04.743 16:20:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:04.743 16:20:38 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:04.743 16:20:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:04.743 16:20:38 -- common/autotest_common.sh@10 -- # set +x 00:09:04.743 16:20:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:04.743 16:20:38 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:04.743 16:20:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:04.743 16:20:38 -- common/autotest_common.sh@10 -- # set +x 00:09:04.743 16:20:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:04.743 16:20:38 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d --hostid=35bbb10f-fc38-42ac-b909-033700c5e05d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:04.743 16:20:38 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:04.743 16:20:38 -- common/autotest_common.sh@1184 -- # local i=0 00:09:04.743 16:20:38 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:04.743 16:20:38 -- common/autotest_common.sh@1186 -- 
# [[ -n '' ]] 00:09:04.743 16:20:38 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:07.331 16:20:40 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:07.331 16:20:40 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:07.331 16:20:40 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:07.331 16:20:40 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:07.331 16:20:40 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:07.331 16:20:40 -- common/autotest_common.sh@1194 -- # return 0 00:09:07.331 16:20:40 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:07.331 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.331 16:20:40 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:07.331 16:20:40 -- common/autotest_common.sh@1205 -- # local i=0 00:09:07.331 16:20:40 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:07.331 16:20:40 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:07.331 16:20:40 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:07.331 16:20:40 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:07.331 16:20:40 -- common/autotest_common.sh@1217 -- # return 0 00:09:07.332 16:20:40 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:07.332 16:20:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:07.332 16:20:40 -- common/autotest_common.sh@10 -- # set +x 00:09:07.332 16:20:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:07.332 16:20:40 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:07.332 16:20:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:07.332 16:20:40 -- common/autotest_common.sh@10 -- # set +x 00:09:07.332 16:20:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:07.332 16:20:40 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:07.332 16:20:40 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:07.332 16:20:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:07.332 16:20:40 -- common/autotest_common.sh@10 -- # set +x 00:09:07.332 16:20:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:07.332 16:20:40 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:07.332 16:20:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:07.332 16:20:40 -- common/autotest_common.sh@10 -- # set +x 00:09:07.332 [2024-04-17 16:20:40.883255] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:07.332 16:20:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:07.332 16:20:40 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:07.332 16:20:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:07.332 16:20:40 -- common/autotest_common.sh@10 -- # set +x 00:09:07.332 16:20:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:07.332 16:20:40 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:07.332 16:20:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:07.332 16:20:40 -- common/autotest_common.sh@10 -- # set +x 00:09:07.332 16:20:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:07.332 
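The wait helpers driving the counters above give the kernel time to finish (or undo) enumeration: both poll lsblk for a block device whose SERIAL column matches the subsystem serial, bounded at roughly 16 iterations as the i++ <= 15 checks in the trace suggest. A minimal reimplementation under those assumptions (the exact sleep placement and the disconnect poll interval are guesses):

    waitforserial() {
        local serial=$1 i=0
        local nvme_device_counter=1 nvme_devices=0
        while (( i++ <= 15 )); do
            sleep 2
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0
        done
        return 1
    }

    waitforserial_disconnect() {
        local serial=$1 i=0
        while (( i++ <= 15 )); do
            lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0
            sleep 1
        done
        return 1
    }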
16:20:40 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d --hostid=35bbb10f-fc38-42ac-b909-033700c5e05d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:07.332 16:20:41 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:07.332 16:20:41 -- common/autotest_common.sh@1184 -- # local i=0 00:09:07.332 16:20:41 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:07.332 16:20:41 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:07.332 16:20:41 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:09.228 16:20:43 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:09.228 16:20:43 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:09.228 16:20:43 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:09.228 16:20:43 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:09.228 16:20:43 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:09.228 16:20:43 -- common/autotest_common.sh@1194 -- # return 0 00:09:09.228 16:20:43 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:09.228 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:09.228 16:20:43 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:09.228 16:20:43 -- common/autotest_common.sh@1205 -- # local i=0 00:09:09.228 16:20:43 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:09.228 16:20:43 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:09.228 16:20:43 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:09.229 16:20:43 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:09.229 16:20:43 -- common/autotest_common.sh@1217 -- # return 0 00:09:09.229 16:20:43 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:09.229 16:20:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:09.229 16:20:43 -- common/autotest_common.sh@10 -- # set +x 00:09:09.229 16:20:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:09.229 16:20:43 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:09.229 16:20:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:09.229 16:20:43 -- common/autotest_common.sh@10 -- # set +x 00:09:09.229 16:20:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:09.229 16:20:43 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:09.229 16:20:43 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:09.229 16:20:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:09.229 16:20:43 -- common/autotest_common.sh@10 -- # set +x 00:09:09.229 16:20:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:09.229 16:20:43 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:09.229 16:20:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:09.229 16:20:43 -- common/autotest_common.sh@10 -- # set +x 00:09:09.229 [2024-04-17 16:20:43.174489] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:09.229 16:20:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:09.229 16:20:43 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:09.229 
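Teardown after each connect is the setup sequence in reverse: disconnect the initiator by NQN, wait until the serial disappears from lsblk, then remove the namespace (id 5, matching the -n 5 used at add time) and delete the subsystem:

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    waitforserial_disconnect SPDKISFASTANDAWESOME    # poll until the serial is gone
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1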
16:20:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:09.229 16:20:43 -- common/autotest_common.sh@10 -- # set +x 00:09:09.229 16:20:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:09.229 16:20:43 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:09.229 16:20:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:09.229 16:20:43 -- common/autotest_common.sh@10 -- # set +x 00:09:09.229 16:20:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:09.229 16:20:43 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d --hostid=35bbb10f-fc38-42ac-b909-033700c5e05d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:09.486 16:20:43 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:09.486 16:20:43 -- common/autotest_common.sh@1184 -- # local i=0 00:09:09.486 16:20:43 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:09.486 16:20:43 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:09.486 16:20:43 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:11.398 16:20:45 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:11.398 16:20:45 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:11.398 16:20:45 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:11.398 16:20:45 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:11.398 16:20:45 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:11.398 16:20:45 -- common/autotest_common.sh@1194 -- # return 0 00:09:11.398 16:20:45 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:11.398 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.398 16:20:45 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:11.398 16:20:45 -- common/autotest_common.sh@1205 -- # local i=0 00:09:11.398 16:20:45 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:11.398 16:20:45 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:11.398 16:20:45 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:11.398 16:20:45 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:11.398 16:20:45 -- common/autotest_common.sh@1217 -- # return 0 00:09:11.398 16:20:45 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:11.398 16:20:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:11.398 16:20:45 -- common/autotest_common.sh@10 -- # set +x 00:09:11.657 16:20:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:11.657 16:20:45 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:11.657 16:20:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:11.657 16:20:45 -- common/autotest_common.sh@10 -- # set +x 00:09:11.657 16:20:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:11.657 16:20:45 -- target/rpc.sh@99 -- # seq 1 5 00:09:11.657 16:20:45 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:11.657 16:20:45 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:11.657 16:20:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:11.657 16:20:45 -- common/autotest_common.sh@10 -- # set +x 00:09:11.657 16:20:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:11.657 16:20:45 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:11.657 16:20:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:11.657 16:20:45 -- common/autotest_common.sh@10 -- # set +x 00:09:11.657 [2024-04-17 16:20:45.465731] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:11.657 16:20:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:11.657 16:20:45 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:11.657 16:20:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:11.657 16:20:45 -- common/autotest_common.sh@10 -- # set +x 00:09:11.657 16:20:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:11.657 16:20:45 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:11.657 16:20:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:11.657 16:20:45 -- common/autotest_common.sh@10 -- # set +x 00:09:11.657 16:20:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:11.657 16:20:45 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:11.657 16:20:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:11.657 16:20:45 -- common/autotest_common.sh@10 -- # set +x 00:09:11.657 16:20:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:11.657 16:20:45 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:11.657 16:20:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:11.657 16:20:45 -- common/autotest_common.sh@10 -- # set +x 00:09:11.657 16:20:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:11.657 16:20:45 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:11.657 16:20:45 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:11.657 16:20:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:11.657 16:20:45 -- common/autotest_common.sh@10 -- # set +x 00:09:11.657 16:20:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:11.657 16:20:45 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:11.657 16:20:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:11.657 16:20:45 -- common/autotest_common.sh@10 -- # set +x 00:09:11.657 [2024-04-17 16:20:45.513760] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:11.657 16:20:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:11.657 16:20:45 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:11.657 16:20:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:11.657 16:20:45 -- common/autotest_common.sh@10 -- # set +x 00:09:11.657 16:20:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:11.657 16:20:45 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:11.657 16:20:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:11.657 16:20:45 -- common/autotest_common.sh@10 -- # set +x 00:09:11.657 16:20:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:11.657 16:20:45 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:11.657 16:20:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:11.657 16:20:45 -- 
common/autotest_common.sh@10 -- # set +x 00:09:11.657 16:20:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:11.657 16:20:45 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:11.657 16:20:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:11.657 16:20:45 -- common/autotest_common.sh@10 -- # set +x 00:09:11.657 16:20:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:11.657 16:20:45 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:11.657 16:20:45 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:11.657 16:20:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:11.657 16:20:45 -- common/autotest_common.sh@10 -- # set +x 00:09:11.657 16:20:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:11.657 16:20:45 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:11.657 16:20:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:11.657 16:20:45 -- common/autotest_common.sh@10 -- # set +x 00:09:11.657 [2024-04-17 16:20:45.561791] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:11.657 16:20:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:11.657 16:20:45 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:11.657 16:20:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:11.657 16:20:45 -- common/autotest_common.sh@10 -- # set +x 00:09:11.657 16:20:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:11.657 16:20:45 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:11.657 16:20:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:11.657 16:20:45 -- common/autotest_common.sh@10 -- # set +x 00:09:11.657 16:20:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:11.657 16:20:45 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:11.657 16:20:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:11.657 16:20:45 -- common/autotest_common.sh@10 -- # set +x 00:09:11.657 16:20:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:11.657 16:20:45 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:11.657 16:20:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:11.657 16:20:45 -- common/autotest_common.sh@10 -- # set +x 00:09:11.657 16:20:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:11.657 16:20:45 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:11.657 16:20:45 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:11.657 16:20:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:11.657 16:20:45 -- common/autotest_common.sh@10 -- # set +x 00:09:11.657 16:20:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:11.657 16:20:45 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:11.657 16:20:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:11.657 16:20:45 -- common/autotest_common.sh@10 -- # set +x 00:09:11.657 [2024-04-17 16:20:45.609889] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:11.657 16:20:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:11.657 
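The @99 loop above is pure control-plane churn: no host ever connects. Five times in a row the test creates a subsystem, adds a listener and a namespace, then removes them again, exercising the create/delete RPC paths without any I/O. One iteration under the same naming:

    for i in $(seq 1 5); do
        scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
        scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done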
16:20:45 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:11.657 16:20:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:11.657 16:20:45 -- common/autotest_common.sh@10 -- # set +x 00:09:11.657 16:20:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:11.657 16:20:45 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:11.657 16:20:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:11.657 16:20:45 -- common/autotest_common.sh@10 -- # set +x 00:09:11.657 16:20:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:11.657 16:20:45 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:11.657 16:20:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:11.657 16:20:45 -- common/autotest_common.sh@10 -- # set +x 00:09:11.657 16:20:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:11.657 16:20:45 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:11.657 16:20:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:11.657 16:20:45 -- common/autotest_common.sh@10 -- # set +x 00:09:11.657 16:20:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:11.657 16:20:45 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:11.657 16:20:45 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:11.658 16:20:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:11.658 16:20:45 -- common/autotest_common.sh@10 -- # set +x 00:09:11.658 16:20:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:11.658 16:20:45 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:11.658 16:20:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:11.658 16:20:45 -- common/autotest_common.sh@10 -- # set +x 00:09:11.658 [2024-04-17 16:20:45.657913] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:11.658 16:20:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:11.658 16:20:45 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:11.658 16:20:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:11.658 16:20:45 -- common/autotest_common.sh@10 -- # set +x 00:09:11.658 16:20:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:11.658 16:20:45 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:11.658 16:20:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:11.658 16:20:45 -- common/autotest_common.sh@10 -- # set +x 00:09:11.658 16:20:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:11.658 16:20:45 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:11.658 16:20:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:11.658 16:20:45 -- common/autotest_common.sh@10 -- # set +x 00:09:11.658 16:20:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:11.658 16:20:45 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:11.658 16:20:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:11.658 16:20:45 -- common/autotest_common.sh@10 -- # set +x 00:09:11.658 16:20:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:11.658 16:20:45 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
00:09:11.658 16:20:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:11.658 16:20:45 -- common/autotest_common.sh@10 -- # set +x 00:09:11.916 16:20:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:11.916 16:20:45 -- target/rpc.sh@110 -- # stats='{ 00:09:11.916 "poll_groups": [ 00:09:11.916 { 00:09:11.916 "admin_qpairs": 2, 00:09:11.916 "completed_nvme_io": 117, 00:09:11.916 "current_admin_qpairs": 0, 00:09:11.916 "current_io_qpairs": 0, 00:09:11.916 "io_qpairs": 16, 00:09:11.916 "name": "nvmf_tgt_poll_group_0", 00:09:11.916 "pending_bdev_io": 0, 00:09:11.916 "transports": [ 00:09:11.916 { 00:09:11.916 "trtype": "TCP" 00:09:11.916 } 00:09:11.916 ] 00:09:11.916 }, 00:09:11.916 { 00:09:11.916 "admin_qpairs": 3, 00:09:11.916 "completed_nvme_io": 165, 00:09:11.916 "current_admin_qpairs": 0, 00:09:11.916 "current_io_qpairs": 0, 00:09:11.916 "io_qpairs": 17, 00:09:11.916 "name": "nvmf_tgt_poll_group_1", 00:09:11.916 "pending_bdev_io": 0, 00:09:11.916 "transports": [ 00:09:11.916 { 00:09:11.916 "trtype": "TCP" 00:09:11.916 } 00:09:11.916 ] 00:09:11.916 }, 00:09:11.916 { 00:09:11.916 "admin_qpairs": 1, 00:09:11.916 "completed_nvme_io": 70, 00:09:11.916 "current_admin_qpairs": 0, 00:09:11.916 "current_io_qpairs": 0, 00:09:11.916 "io_qpairs": 19, 00:09:11.916 "name": "nvmf_tgt_poll_group_2", 00:09:11.916 "pending_bdev_io": 0, 00:09:11.916 "transports": [ 00:09:11.916 { 00:09:11.916 "trtype": "TCP" 00:09:11.916 } 00:09:11.916 ] 00:09:11.916 }, 00:09:11.916 { 00:09:11.916 "admin_qpairs": 1, 00:09:11.916 "completed_nvme_io": 68, 00:09:11.916 "current_admin_qpairs": 0, 00:09:11.916 "current_io_qpairs": 0, 00:09:11.916 "io_qpairs": 18, 00:09:11.916 "name": "nvmf_tgt_poll_group_3", 00:09:11.916 "pending_bdev_io": 0, 00:09:11.916 "transports": [ 00:09:11.916 { 00:09:11.916 "trtype": "TCP" 00:09:11.916 } 00:09:11.916 ] 00:09:11.916 } 00:09:11.916 ], 00:09:11.916 "tick_rate": 2200000000 00:09:11.916 }' 00:09:11.916 16:20:45 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:09:11.916 16:20:45 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:11.916 16:20:45 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:11.916 16:20:45 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:11.916 16:20:45 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:09:11.916 16:20:45 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:09:11.916 16:20:45 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:11.916 16:20:45 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:11.916 16:20:45 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:11.916 16:20:45 -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:09:11.916 16:20:45 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:09:11.916 16:20:45 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:09:11.916 16:20:45 -- target/rpc.sh@123 -- # nvmftestfini 00:09:11.916 16:20:45 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:11.916 16:20:45 -- nvmf/common.sh@117 -- # sync 00:09:11.916 16:20:45 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:11.916 16:20:45 -- nvmf/common.sh@120 -- # set +e 00:09:11.916 16:20:45 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:11.916 16:20:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:11.916 rmmod nvme_tcp 00:09:11.916 rmmod nvme_fabrics 00:09:11.916 rmmod nvme_keyring 00:09:11.916 16:20:45 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:11.916 16:20:45 -- nvmf/common.sh@124 -- # set -e 00:09:11.916 16:20:45 -- nvmf/common.sh@125 
-- # return 0 00:09:11.916 16:20:45 -- nvmf/common.sh@478 -- # '[' -n 67210 ']' 00:09:11.916 16:20:45 -- nvmf/common.sh@479 -- # killprocess 67210 00:09:11.916 16:20:45 -- common/autotest_common.sh@936 -- # '[' -z 67210 ']' 00:09:11.916 16:20:45 -- common/autotest_common.sh@940 -- # kill -0 67210 00:09:11.916 16:20:45 -- common/autotest_common.sh@941 -- # uname 00:09:11.916 16:20:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:11.916 16:20:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67210 00:09:11.916 killing process with pid 67210 00:09:11.916 16:20:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:11.916 16:20:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:11.916 16:20:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67210' 00:09:11.916 16:20:45 -- common/autotest_common.sh@955 -- # kill 67210 00:09:11.916 16:20:45 -- common/autotest_common.sh@960 -- # wait 67210 00:09:12.482 16:20:46 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:12.482 16:20:46 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:12.482 16:20:46 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:12.482 16:20:46 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:12.482 16:20:46 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:12.482 16:20:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.482 16:20:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:12.482 16:20:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.482 16:20:46 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:12.482 00:09:12.482 real 0m18.858s 00:09:12.482 user 1m10.294s 00:09:12.482 sys 0m2.573s 00:09:12.482 16:20:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:12.482 16:20:46 -- common/autotest_common.sh@10 -- # set +x 00:09:12.482 ************************************ 00:09:12.482 END TEST nvmf_rpc 00:09:12.482 ************************************ 00:09:12.482 16:20:46 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:12.482 16:20:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:12.482 16:20:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:12.482 16:20:46 -- common/autotest_common.sh@10 -- # set +x 00:09:12.482 ************************************ 00:09:12.482 START TEST nvmf_invalid 00:09:12.482 ************************************ 00:09:12.482 16:20:46 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:12.482 * Looking for test storage... 
00:09:12.482 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:12.482 16:20:46 -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:12.482 16:20:46 -- nvmf/common.sh@7 -- # uname -s 00:09:12.482 16:20:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:12.482 16:20:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:12.482 16:20:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:12.482 16:20:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:12.482 16:20:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:12.482 16:20:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:12.482 16:20:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:12.482 16:20:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:12.482 16:20:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:12.482 16:20:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:12.482 16:20:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d 00:09:12.482 16:20:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=35bbb10f-fc38-42ac-b909-033700c5e05d 00:09:12.482 16:20:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:12.482 16:20:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:12.482 16:20:46 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:12.482 16:20:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:12.482 16:20:46 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:12.482 16:20:46 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:12.482 16:20:46 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:12.482 16:20:46 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:12.483 16:20:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.483 16:20:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.483 16:20:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.483 16:20:46 -- paths/export.sh@5 -- # export PATH 00:09:12.483 16:20:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.483 16:20:46 -- nvmf/common.sh@47 -- # : 0 00:09:12.483 16:20:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:12.483 16:20:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:12.483 16:20:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:12.483 16:20:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:12.483 16:20:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:12.483 16:20:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:12.483 16:20:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:12.483 16:20:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:12.483 16:20:46 -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:09:12.483 16:20:46 -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:12.483 16:20:46 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:09:12.483 16:20:46 -- target/invalid.sh@14 -- # target=foobar 00:09:12.483 16:20:46 -- target/invalid.sh@16 -- # RANDOM=0 00:09:12.483 16:20:46 -- target/invalid.sh@34 -- # nvmftestinit 00:09:12.483 16:20:46 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:12.483 16:20:46 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:12.483 16:20:46 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:12.483 16:20:46 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:12.483 16:20:46 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:12.483 16:20:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.483 16:20:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:12.483 16:20:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.483 16:20:46 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:09:12.483 16:20:46 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:09:12.483 16:20:46 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:09:12.483 16:20:46 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:09:12.483 16:20:46 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:09:12.483 16:20:46 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:09:12.483 16:20:46 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:12.483 16:20:46 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:12.483 16:20:46 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 
00:09:12.483 16:20:46 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:12.483 16:20:46 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:12.483 16:20:46 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:12.483 16:20:46 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:12.483 16:20:46 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:12.483 16:20:46 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:12.483 16:20:46 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:12.483 16:20:46 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:12.483 16:20:46 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:12.483 16:20:46 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:12.483 16:20:46 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:12.483 Cannot find device "nvmf_tgt_br" 00:09:12.483 16:20:46 -- nvmf/common.sh@155 -- # true 00:09:12.483 16:20:46 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:12.483 Cannot find device "nvmf_tgt_br2" 00:09:12.483 16:20:46 -- nvmf/common.sh@156 -- # true 00:09:12.483 16:20:46 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:12.483 16:20:46 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:12.483 Cannot find device "nvmf_tgt_br" 00:09:12.483 16:20:46 -- nvmf/common.sh@158 -- # true 00:09:12.483 16:20:46 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:12.741 Cannot find device "nvmf_tgt_br2" 00:09:12.741 16:20:46 -- nvmf/common.sh@159 -- # true 00:09:12.741 16:20:46 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:12.741 16:20:46 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:12.741 16:20:46 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:12.741 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:12.741 16:20:46 -- nvmf/common.sh@162 -- # true 00:09:12.741 16:20:46 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:12.741 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:12.741 16:20:46 -- nvmf/common.sh@163 -- # true 00:09:12.741 16:20:46 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:12.741 16:20:46 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:12.741 16:20:46 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:12.741 16:20:46 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:12.741 16:20:46 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:12.741 16:20:46 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:12.741 16:20:46 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:12.741 16:20:46 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:12.741 16:20:46 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:12.741 16:20:46 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:12.741 16:20:46 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:12.741 16:20:46 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:12.741 16:20:46 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 
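With NET_TYPE=virt the whole NVMe/TCP path runs on a virtual topology: the target lives in a network namespace, each side gets a veth pair, and a Linux bridge joins the host-side ends so the initiator at 10.0.0.1 can reach the target at 10.0.0.2. The ip commands in the trace reduce to roughly the following (the second target interface, nvmf_tgt_if2 at 10.0.0.3, follows the same pattern and is omitted here):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # Open the NVMe/TCP port and sanity-check reachability.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2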
00:09:12.741 16:20:46 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:12.741 16:20:46 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:12.741 16:20:46 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:12.741 16:20:46 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:12.741 16:20:46 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:12.741 16:20:46 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:12.741 16:20:46 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:12.741 16:20:46 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:12.741 16:20:46 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:12.741 16:20:46 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:12.741 16:20:46 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:12.741 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:12.741 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:09:12.741 00:09:12.741 --- 10.0.0.2 ping statistics --- 00:09:12.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.741 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:09:12.741 16:20:46 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:12.741 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:12.741 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:09:12.741 00:09:12.741 --- 10.0.0.3 ping statistics --- 00:09:12.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.741 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:09:12.741 16:20:46 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:12.999 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:12.999 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:09:12.999 00:09:12.999 --- 10.0.0.1 ping statistics --- 00:09:12.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.999 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:09:12.999 16:20:46 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:12.999 16:20:46 -- nvmf/common.sh@422 -- # return 0 00:09:12.999 16:20:46 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:12.999 16:20:46 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:12.999 16:20:46 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:12.999 16:20:46 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:12.999 16:20:46 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:12.999 16:20:46 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:12.999 16:20:46 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:12.999 16:20:46 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:09:12.999 16:20:46 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:12.999 16:20:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:12.999 16:20:46 -- common/autotest_common.sh@10 -- # set +x 00:09:12.999 16:20:46 -- nvmf/common.sh@470 -- # nvmfpid=67733 00:09:12.999 16:20:46 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:12.999 16:20:46 -- nvmf/common.sh@471 -- # waitforlisten 67733 00:09:12.999 16:20:46 -- common/autotest_common.sh@817 -- # '[' -z 67733 ']' 00:09:12.999 16:20:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.999 16:20:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:12.999 16:20:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.999 16:20:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:12.999 16:20:46 -- common/autotest_common.sh@10 -- # set +x 00:09:12.999 [2024-04-17 16:20:46.868976] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:09:12.999 [2024-04-17 16:20:46.869320] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:12.999 [2024-04-17 16:20:47.007079] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:13.261 [2024-04-17 16:20:47.152093] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:13.261 [2024-04-17 16:20:47.152166] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:13.261 [2024-04-17 16:20:47.152179] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:13.261 [2024-04-17 16:20:47.152188] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:13.261 [2024-04-17 16:20:47.152196] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
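
At this point the dual-interface test topology is in place and verified: the initiator-side veth (nvmf_init_if, 10.0.0.1) and the two target-side interfaces inside the nvmf_tgt_ns_spdk namespace (nvmf_tgt_if at 10.0.0.2, nvmf_tgt_if2 at 10.0.0.3) are joined over the nvmf_br bridge, TCP port 4420 is opened in iptables, and all three addresses answer a one-packet ping. Condensed from the nvmf/common.sh records above, the topology amounts to:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk sh -c \
      'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for l in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" master nvmf_br; done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

nvmfappstart then launches the target inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF, pid 67733 here) and waitforlisten blocks until /var/tmp/spdk.sock accepts RPCs; the DPDK EAL parameter dump and trace-group notices above, and the reactor-start notices below, are normal startup output.
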
00:09:13.261 [2024-04-17 16:20:47.152332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:13.261 [2024-04-17 16:20:47.152417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:13.261 [2024-04-17 16:20:47.152971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:13.261 [2024-04-17 16:20:47.152991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.205 16:20:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:14.205 16:20:48 -- common/autotest_common.sh@850 -- # return 0 00:09:14.205 16:20:48 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:14.205 16:20:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:14.205 16:20:48 -- common/autotest_common.sh@10 -- # set +x 00:09:14.205 16:20:48 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:14.205 16:20:48 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:14.205 16:20:48 -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode18203 00:09:14.463 [2024-04-17 16:20:48.457450] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:09:14.463 16:20:48 -- target/invalid.sh@40 -- # out='2024/04/17 16:20:48 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode18203 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:09:14.463 request: 00:09:14.463 { 00:09:14.463 "method": "nvmf_create_subsystem", 00:09:14.463 "params": { 00:09:14.463 "nqn": "nqn.2016-06.io.spdk:cnode18203", 00:09:14.463 "tgt_name": "foobar" 00:09:14.463 } 00:09:14.463 } 00:09:14.463 Got JSON-RPC error response 00:09:14.463 GoRPCClient: error on JSON-RPC call' 00:09:14.463 16:20:48 -- target/invalid.sh@41 -- # [[ 2024/04/17 16:20:48 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode18203 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:09:14.463 request: 00:09:14.463 { 00:09:14.463 "method": "nvmf_create_subsystem", 00:09:14.463 "params": { 00:09:14.463 "nqn": "nqn.2016-06.io.spdk:cnode18203", 00:09:14.463 "tgt_name": "foobar" 00:09:14.463 } 00:09:14.463 } 00:09:14.463 Got JSON-RPC error response 00:09:14.463 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:09:14.463 16:20:48 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:09:14.463 16:20:48 -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode28497 00:09:15.030 [2024-04-17 16:20:48.887202] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28497: invalid serial number 'SPDKISFASTANDAWESOME' 00:09:15.030 16:20:48 -- target/invalid.sh@45 -- # out='2024/04/17 16:20:48 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode28497 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:09:15.030 request: 00:09:15.030 { 00:09:15.030 "method": "nvmf_create_subsystem", 00:09:15.030 "params": { 00:09:15.030 "nqn": "nqn.2016-06.io.spdk:cnode28497", 00:09:15.030 
"serial_number": "SPDKISFASTANDAWESOME\u001f" 00:09:15.030 } 00:09:15.030 } 00:09:15.030 Got JSON-RPC error response 00:09:15.030 GoRPCClient: error on JSON-RPC call' 00:09:15.030 16:20:48 -- target/invalid.sh@46 -- # [[ 2024/04/17 16:20:48 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode28497 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:09:15.030 request: 00:09:15.030 { 00:09:15.030 "method": "nvmf_create_subsystem", 00:09:15.030 "params": { 00:09:15.030 "nqn": "nqn.2016-06.io.spdk:cnode28497", 00:09:15.030 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:09:15.030 } 00:09:15.030 } 00:09:15.030 Got JSON-RPC error response 00:09:15.030 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:15.030 16:20:48 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:09:15.030 16:20:48 -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode776 00:09:15.596 [2024-04-17 16:20:49.339981] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode776: invalid model number 'SPDK_Controller' 00:09:15.596 16:20:49 -- target/invalid.sh@50 -- # out='2024/04/17 16:20:49 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode776], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:09:15.596 request: 00:09:15.596 { 00:09:15.596 "method": "nvmf_create_subsystem", 00:09:15.596 "params": { 00:09:15.596 "nqn": "nqn.2016-06.io.spdk:cnode776", 00:09:15.596 "model_number": "SPDK_Controller\u001f" 00:09:15.596 } 00:09:15.596 } 00:09:15.596 Got JSON-RPC error response 00:09:15.596 GoRPCClient: error on JSON-RPC call' 00:09:15.596 16:20:49 -- target/invalid.sh@51 -- # [[ 2024/04/17 16:20:49 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode776], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:09:15.596 request: 00:09:15.596 { 00:09:15.596 "method": "nvmf_create_subsystem", 00:09:15.596 "params": { 00:09:15.596 "nqn": "nqn.2016-06.io.spdk:cnode776", 00:09:15.596 "model_number": "SPDK_Controller\u001f" 00:09:15.596 } 00:09:15.596 } 00:09:15.596 Got JSON-RPC error response 00:09:15.596 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:15.596 16:20:49 -- target/invalid.sh@54 -- # gen_random_s 21 00:09:15.596 16:20:49 -- target/invalid.sh@19 -- # local length=21 ll 00:09:15.596 16:20:49 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:15.597 16:20:49 -- target/invalid.sh@21 -- # local chars 00:09:15.597 16:20:49 -- target/invalid.sh@22 -- # local string 00:09:15.597 16:20:49 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:15.597 16:20:49 -- target/invalid.sh@24 -- # (( ll < length )) 
00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # printf %x 53 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # echo -e '\x35' 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # string+=5 00:09:15.597 16:20:49 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.597 16:20:49 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # printf %x 106 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # string+=j 00:09:15.597 16:20:49 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.597 16:20:49 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # printf %x 48 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # echo -e '\x30' 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # string+=0 00:09:15.597 16:20:49 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.597 16:20:49 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # printf %x 66 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # echo -e '\x42' 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # string+=B 00:09:15.597 16:20:49 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.597 16:20:49 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # printf %x 53 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # echo -e '\x35' 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # string+=5 00:09:15.597 16:20:49 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.597 16:20:49 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # printf %x 63 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # string+='?' 
00:09:15.597 16:20:49 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.597 16:20:49 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # printf %x 92 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # string+='\' 00:09:15.597 16:20:49 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.597 16:20:49 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # printf %x 83 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # echo -e '\x53' 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # string+=S 00:09:15.597 16:20:49 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.597 16:20:49 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # printf %x 108 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # string+=l 00:09:15.597 16:20:49 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.597 16:20:49 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # printf %x 61 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # string+== 00:09:15.597 16:20:49 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.597 16:20:49 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # printf %x 90 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # string+=Z 00:09:15.597 16:20:49 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.597 16:20:49 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # printf %x 72 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # echo -e '\x48' 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # string+=H 00:09:15.597 16:20:49 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.597 16:20:49 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # printf %x 88 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # echo -e '\x58' 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # string+=X 00:09:15.597 16:20:49 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.597 16:20:49 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # printf %x 60 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # string+='<' 00:09:15.597 16:20:49 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.597 16:20:49 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # printf %x 52 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # echo -e '\x34' 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # string+=4 00:09:15.597 16:20:49 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.597 16:20:49 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # printf %x 45 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # string+=- 00:09:15.597 16:20:49 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.597 16:20:49 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # printf %x 58 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # string+=: 
00:09:15.597 16:20:49 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.597 16:20:49 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # printf %x 99 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # echo -e '\x63' 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # string+=c 00:09:15.597 16:20:49 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.597 16:20:49 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # printf %x 36 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # echo -e '\x24' 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # string+='$' 00:09:15.597 16:20:49 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.597 16:20:49 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # printf %x 109 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # string+=m 00:09:15.597 16:20:49 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.597 16:20:49 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # printf %x 83 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # echo -e '\x53' 00:09:15.597 16:20:49 -- target/invalid.sh@25 -- # string+=S 00:09:15.597 16:20:49 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.597 16:20:49 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.597 16:20:49 -- target/invalid.sh@28 -- # [[ 5 == \- ]] 00:09:15.597 16:20:49 -- target/invalid.sh@31 -- # echo '5j0B5?\Sl=ZHX<4-:c$mS' 00:09:15.597 16:20:49 -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s '5j0B5?\Sl=ZHX<4-:c$mS' nqn.2016-06.io.spdk:cnode19451 00:09:15.855 [2024-04-17 16:20:49.824827] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19451: invalid serial number '5j0B5?\Sl=ZHX<4-:c$mS' 00:09:15.855 16:20:49 -- target/invalid.sh@54 -- # out='2024/04/17 16:20:49 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode19451 serial_number:5j0B5?\Sl=ZHX<4-:c$mS], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN 5j0B5?\Sl=ZHX<4-:c$mS 00:09:15.855 request: 00:09:15.855 { 00:09:15.855 "method": "nvmf_create_subsystem", 00:09:15.855 "params": { 00:09:15.855 "nqn": "nqn.2016-06.io.spdk:cnode19451", 00:09:15.855 "serial_number": "5j0B5?\\Sl=ZHX<4-:c$mS" 00:09:15.855 } 00:09:15.855 } 00:09:15.855 Got JSON-RPC error response 00:09:15.856 GoRPCClient: error on JSON-RPC call' 00:09:15.856 16:20:49 -- target/invalid.sh@55 -- # [[ 2024/04/17 16:20:49 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode19451 serial_number:5j0B5?\Sl=ZHX<4-:c$mS], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN 5j0B5?\Sl=ZHX<4-:c$mS 00:09:15.856 request: 00:09:15.856 { 00:09:15.856 "method": "nvmf_create_subsystem", 00:09:15.856 "params": { 00:09:15.856 "nqn": "nqn.2016-06.io.spdk:cnode19451", 00:09:15.856 "serial_number": "5j0B5?\\Sl=ZHX<4-:c$mS" 00:09:15.856 } 00:09:15.856 } 00:09:15.856 Got JSON-RPC error response 00:09:15.856 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:15.856 16:20:49 -- target/invalid.sh@58 -- # gen_random_s 41 00:09:15.856 16:20:49 -- target/invalid.sh@19 -- # local length=41 ll 00:09:15.856 16:20:49 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' 
'48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:15.856 16:20:49 -- target/invalid.sh@21 -- # local chars 00:09:15.856 16:20:49 -- target/invalid.sh@22 -- # local string 00:09:15.856 16:20:49 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:15.856 16:20:49 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.856 16:20:49 -- target/invalid.sh@25 -- # printf %x 50 00:09:15.856 16:20:49 -- target/invalid.sh@25 -- # echo -e '\x32' 00:09:15.856 16:20:49 -- target/invalid.sh@25 -- # string+=2 00:09:15.856 16:20:49 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.856 16:20:49 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.856 16:20:49 -- target/invalid.sh@25 -- # printf %x 86 00:09:15.856 16:20:49 -- target/invalid.sh@25 -- # echo -e '\x56' 00:09:15.856 16:20:49 -- target/invalid.sh@25 -- # string+=V 00:09:15.856 16:20:49 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.856 16:20:49 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.856 16:20:49 -- target/invalid.sh@25 -- # printf %x 68 00:09:15.856 16:20:49 -- target/invalid.sh@25 -- # echo -e '\x44' 00:09:15.856 16:20:49 -- target/invalid.sh@25 -- # string+=D 00:09:15.856 16:20:49 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.856 16:20:49 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.856 16:20:49 -- target/invalid.sh@25 -- # printf %x 58 00:09:15.856 16:20:49 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:09:15.856 16:20:49 -- target/invalid.sh@25 -- # string+=: 00:09:15.856 16:20:49 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.856 16:20:49 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.856 16:20:49 -- target/invalid.sh@25 -- # printf %x 56 00:09:15.856 16:20:49 -- target/invalid.sh@25 -- # echo -e '\x38' 00:09:15.856 16:20:49 -- target/invalid.sh@25 -- # string+=8 00:09:15.856 16:20:49 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.856 16:20:49 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.856 16:20:49 -- target/invalid.sh@25 -- # printf %x 109 00:09:15.856 16:20:49 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:09:15.856 16:20:49 -- target/invalid.sh@25 -- # string+=m 00:09:15.856 16:20:49 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.856 16:20:49 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.856 16:20:49 -- target/invalid.sh@25 -- # printf %x 37 00:09:15.856 16:20:49 -- target/invalid.sh@25 -- # echo -e '\x25' 00:09:15.856 16:20:49 -- target/invalid.sh@25 -- # string+=% 00:09:15.856 16:20:49 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.856 16:20:49 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.856 16:20:49 -- target/invalid.sh@25 -- # printf %x 66 00:09:15.856 16:20:49 -- target/invalid.sh@25 -- # echo -e '\x42' 00:09:15.856 16:20:49 -- target/invalid.sh@25 -- # string+=B 00:09:15.856 16:20:49 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.856 16:20:49 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:15.856 16:20:49 -- target/invalid.sh@25 -- # printf %x 116 00:09:15.856 16:20:49 -- target/invalid.sh@25 -- # echo -e '\x74' 00:09:15.856 16:20:49 -- target/invalid.sh@25 -- # string+=t 00:09:15.856 16:20:49 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.856 16:20:49 -- 
target/invalid.sh@24 -- # (( ll < length )) 00:09:15.856 16:20:49 -- target/invalid.sh@25 -- # printf %x 107 00:09:15.856 16:20:49 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:09:15.856 16:20:49 -- target/invalid.sh@25 -- # string+=k 00:09:15.856 16:20:49 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:15.856 16:20:49 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.114 16:20:49 -- target/invalid.sh@25 -- # printf %x 118 00:09:16.114 16:20:49 -- target/invalid.sh@25 -- # echo -e '\x76' 00:09:16.114 16:20:49 -- target/invalid.sh@25 -- # string+=v 00:09:16.114 16:20:49 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.114 16:20:49 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.114 16:20:49 -- target/invalid.sh@25 -- # printf %x 53 00:09:16.114 16:20:49 -- target/invalid.sh@25 -- # echo -e '\x35' 00:09:16.114 16:20:49 -- target/invalid.sh@25 -- # string+=5 00:09:16.114 16:20:49 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.114 16:20:49 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.114 16:20:49 -- target/invalid.sh@25 -- # printf %x 34 00:09:16.114 16:20:49 -- target/invalid.sh@25 -- # echo -e '\x22' 00:09:16.114 16:20:49 -- target/invalid.sh@25 -- # string+='"' 00:09:16.114 16:20:49 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.114 16:20:49 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.114 16:20:49 -- target/invalid.sh@25 -- # printf %x 108 00:09:16.114 16:20:49 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:09:16.114 16:20:49 -- target/invalid.sh@25 -- # string+=l 00:09:16.114 16:20:49 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.114 16:20:49 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.114 16:20:49 -- target/invalid.sh@25 -- # printf %x 58 00:09:16.114 16:20:49 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:09:16.114 16:20:49 -- target/invalid.sh@25 -- # string+=: 00:09:16.114 16:20:49 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.114 16:20:49 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.114 16:20:49 -- target/invalid.sh@25 -- # printf %x 34 00:09:16.114 16:20:49 -- target/invalid.sh@25 -- # echo -e '\x22' 00:09:16.114 16:20:49 -- target/invalid.sh@25 -- # string+='"' 00:09:16.114 16:20:49 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.114 16:20:49 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.114 16:20:49 -- target/invalid.sh@25 -- # printf %x 65 00:09:16.114 16:20:49 -- target/invalid.sh@25 -- # echo -e '\x41' 00:09:16.114 16:20:49 -- target/invalid.sh@25 -- # string+=A 00:09:16.114 16:20:49 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.114 16:20:49 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.115 16:20:49 -- target/invalid.sh@25 -- # printf %x 34 00:09:16.115 16:20:49 -- target/invalid.sh@25 -- # echo -e '\x22' 00:09:16.115 16:20:49 -- target/invalid.sh@25 -- # string+='"' 00:09:16.115 16:20:49 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.115 16:20:49 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.115 16:20:49 -- target/invalid.sh@25 -- # printf %x 58 00:09:16.115 16:20:49 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:09:16.115 16:20:49 -- target/invalid.sh@25 -- # string+=: 00:09:16.115 16:20:49 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.115 16:20:49 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.115 16:20:49 -- target/invalid.sh@25 -- # printf %x 65 00:09:16.115 16:20:49 -- target/invalid.sh@25 -- # echo -e '\x41' 00:09:16.115 16:20:49 -- target/invalid.sh@25 -- # string+=A 00:09:16.115 16:20:49 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.115 16:20:49 -- 
target/invalid.sh@24 -- # (( ll < length )) 00:09:16.115 16:20:49 -- target/invalid.sh@25 -- # printf %x 42 00:09:16.115 16:20:49 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:09:16.115 16:20:49 -- target/invalid.sh@25 -- # string+='*' 00:09:16.115 16:20:49 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.115 16:20:49 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.115 16:20:49 -- target/invalid.sh@25 -- # printf %x 33 00:09:16.115 16:20:49 -- target/invalid.sh@25 -- # echo -e '\x21' 00:09:16.115 16:20:49 -- target/invalid.sh@25 -- # string+='!' 00:09:16.115 16:20:49 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.115 16:20:49 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.115 16:20:49 -- target/invalid.sh@25 -- # printf %x 88 00:09:16.115 16:20:49 -- target/invalid.sh@25 -- # echo -e '\x58' 00:09:16.115 16:20:49 -- target/invalid.sh@25 -- # string+=X 00:09:16.115 16:20:49 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.115 16:20:49 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.115 16:20:49 -- target/invalid.sh@25 -- # printf %x 101 00:09:16.115 16:20:49 -- target/invalid.sh@25 -- # echo -e '\x65' 00:09:16.115 16:20:49 -- target/invalid.sh@25 -- # string+=e 00:09:16.115 16:20:49 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.115 16:20:49 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.115 16:20:49 -- target/invalid.sh@25 -- # printf %x 38 00:09:16.115 16:20:49 -- target/invalid.sh@25 -- # echo -e '\x26' 00:09:16.115 16:20:49 -- target/invalid.sh@25 -- # string+='&' 00:09:16.115 16:20:49 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.115 16:20:49 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.115 16:20:49 -- target/invalid.sh@25 -- # printf %x 115 00:09:16.115 16:20:49 -- target/invalid.sh@25 -- # echo -e '\x73' 00:09:16.115 16:20:49 -- target/invalid.sh@25 -- # string+=s 00:09:16.115 16:20:49 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.115 16:20:49 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.115 16:20:49 -- target/invalid.sh@25 -- # printf %x 110 00:09:16.115 16:20:49 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:09:16.115 16:20:49 -- target/invalid.sh@25 -- # string+=n 00:09:16.115 16:20:49 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.115 16:20:49 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.115 16:20:49 -- target/invalid.sh@25 -- # printf %x 46 00:09:16.115 16:20:49 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:09:16.115 16:20:49 -- target/invalid.sh@25 -- # string+=. 
00:09:16.115 16:20:49 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.115 16:20:49 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.115 16:20:49 -- target/invalid.sh@25 -- # printf %x 79 00:09:16.115 16:20:49 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:09:16.115 16:20:49 -- target/invalid.sh@25 -- # string+=O 00:09:16.115 16:20:49 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.115 16:20:49 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.115 16:20:49 -- target/invalid.sh@25 -- # printf %x 52 00:09:16.115 16:20:49 -- target/invalid.sh@25 -- # echo -e '\x34' 00:09:16.115 16:20:49 -- target/invalid.sh@25 -- # string+=4 00:09:16.115 16:20:49 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.115 16:20:49 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.115 16:20:49 -- target/invalid.sh@25 -- # printf %x 123 00:09:16.115 16:20:49 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:09:16.115 16:20:49 -- target/invalid.sh@25 -- # string+='{' 00:09:16.115 16:20:49 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.115 16:20:49 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.115 16:20:49 -- target/invalid.sh@25 -- # printf %x 91 00:09:16.115 16:20:49 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:09:16.115 16:20:49 -- target/invalid.sh@25 -- # string+='[' 00:09:16.115 16:20:49 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.115 16:20:49 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.115 16:20:49 -- target/invalid.sh@25 -- # printf %x 84 00:09:16.115 16:20:49 -- target/invalid.sh@25 -- # echo -e '\x54' 00:09:16.115 16:20:49 -- target/invalid.sh@25 -- # string+=T 00:09:16.115 16:20:49 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.115 16:20:49 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.115 16:20:49 -- target/invalid.sh@25 -- # printf %x 71 00:09:16.115 16:20:49 -- target/invalid.sh@25 -- # echo -e '\x47' 00:09:16.115 16:20:49 -- target/invalid.sh@25 -- # string+=G 00:09:16.115 16:20:49 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.115 16:20:49 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.115 16:20:49 -- target/invalid.sh@25 -- # printf %x 64 00:09:16.115 16:20:49 -- target/invalid.sh@25 -- # echo -e '\x40' 00:09:16.115 16:20:49 -- target/invalid.sh@25 -- # string+=@ 00:09:16.115 16:20:49 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.115 16:20:49 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.115 16:20:50 -- target/invalid.sh@25 -- # printf %x 49 00:09:16.115 16:20:50 -- target/invalid.sh@25 -- # echo -e '\x31' 00:09:16.115 16:20:50 -- target/invalid.sh@25 -- # string+=1 00:09:16.115 16:20:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.115 16:20:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.115 16:20:50 -- target/invalid.sh@25 -- # printf %x 83 00:09:16.115 16:20:50 -- target/invalid.sh@25 -- # echo -e '\x53' 00:09:16.115 16:20:50 -- target/invalid.sh@25 -- # string+=S 00:09:16.115 16:20:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.115 16:20:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.115 16:20:50 -- target/invalid.sh@25 -- # printf %x 40 00:09:16.115 16:20:50 -- target/invalid.sh@25 -- # echo -e '\x28' 00:09:16.115 16:20:50 -- target/invalid.sh@25 -- # string+='(' 00:09:16.115 16:20:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.115 16:20:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.115 16:20:50 -- target/invalid.sh@25 -- # printf %x 34 00:09:16.115 16:20:50 -- target/invalid.sh@25 -- # echo -e '\x22' 00:09:16.115 16:20:50 -- target/invalid.sh@25 -- # string+='"' 
00:09:16.115 16:20:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.115 16:20:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.115 16:20:50 -- target/invalid.sh@25 -- # printf %x 46 00:09:16.115 16:20:50 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:09:16.115 16:20:50 -- target/invalid.sh@25 -- # string+=. 00:09:16.115 16:20:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.115 16:20:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.115 16:20:50 -- target/invalid.sh@25 -- # printf %x 125 00:09:16.115 16:20:50 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:09:16.115 16:20:50 -- target/invalid.sh@25 -- # string+='}' 00:09:16.115 16:20:50 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:16.115 16:20:50 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:16.115 16:20:50 -- target/invalid.sh@28 -- # [[ 2 == \- ]] 00:09:16.115 16:20:50 -- target/invalid.sh@31 -- # echo '2VD:8m%Btkv5"l:"A":A*!Xe&sn.O4{[TG@1S(".}' 00:09:16.115 16:20:50 -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d '2VD:8m%Btkv5"l:"A":A*!Xe&sn.O4{[TG@1S(".}' nqn.2016-06.io.spdk:cnode21615 00:09:16.373 [2024-04-17 16:20:50.401754] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21615: invalid model number '2VD:8m%Btkv5"l:"A":A*!Xe&sn.O4{[TG@1S(".}' 00:09:16.631 16:20:50 -- target/invalid.sh@58 -- # out='2024/04/17 16:20:50 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:2VD:8m%Btkv5"l:"A":A*!Xe&sn.O4{[TG@1S(".} nqn:nqn.2016-06.io.spdk:cnode21615], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN 2VD:8m%Btkv5"l:"A":A*!Xe&sn.O4{[TG@1S(".} 00:09:16.631 request: 00:09:16.631 { 00:09:16.631 "method": "nvmf_create_subsystem", 00:09:16.631 "params": { 00:09:16.631 "nqn": "nqn.2016-06.io.spdk:cnode21615", 00:09:16.631 "model_number": "2VD:8m%Btkv5\"l:\"A\":A*!Xe&sn.O4{[TG@1S(\".}" 00:09:16.631 } 00:09:16.631 } 00:09:16.631 Got JSON-RPC error response 00:09:16.631 GoRPCClient: error on JSON-RPC call' 00:09:16.631 16:20:50 -- target/invalid.sh@59 -- # [[ 2024/04/17 16:20:50 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:2VD:8m%Btkv5"l:"A":A*!Xe&sn.O4{[TG@1S(".} nqn:nqn.2016-06.io.spdk:cnode21615], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN 2VD:8m%Btkv5"l:"A":A*!Xe&sn.O4{[TG@1S(".} 00:09:16.631 request: 00:09:16.631 { 00:09:16.631 "method": "nvmf_create_subsystem", 00:09:16.631 "params": { 00:09:16.631 "nqn": "nqn.2016-06.io.spdk:cnode21615", 00:09:16.631 "model_number": "2VD:8m%Btkv5\"l:\"A\":A*!Xe&sn.O4{[TG@1S(\".}" 00:09:16.631 } 00:09:16.631 } 00:09:16.631 Got JSON-RPC error response 00:09:16.631 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:16.631 16:20:50 -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:09:16.631 [2024-04-17 16:20:50.650170] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:16.889 16:20:50 -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:09:17.148 16:20:51 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:09:17.148 16:20:51 -- target/invalid.sh@67 -- # head -n 1 00:09:17.148 16:20:51 -- target/invalid.sh@67 -- # echo '' 00:09:17.148 16:20:51 -- target/invalid.sh@67 -- # IP= 00:09:17.148 16:20:51 -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:09:17.407 [2024-04-17 16:20:51.361051] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:09:17.407 16:20:51 -- target/invalid.sh@69 -- # out='2024/04/17 16:20:51 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:09:17.407 request: 00:09:17.407 { 00:09:17.407 "method": "nvmf_subsystem_remove_listener", 00:09:17.407 "params": { 00:09:17.407 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:17.407 "listen_address": { 00:09:17.407 "trtype": "tcp", 00:09:17.407 "traddr": "", 00:09:17.407 "trsvcid": "4421" 00:09:17.407 } 00:09:17.407 } 00:09:17.407 } 00:09:17.407 Got JSON-RPC error response 00:09:17.407 GoRPCClient: error on JSON-RPC call' 00:09:17.408 16:20:51 -- target/invalid.sh@70 -- # [[ 2024/04/17 16:20:51 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:09:17.408 request: 00:09:17.408 { 00:09:17.408 "method": "nvmf_subsystem_remove_listener", 00:09:17.408 "params": { 00:09:17.408 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:17.408 "listen_address": { 00:09:17.408 "trtype": "tcp", 00:09:17.408 "traddr": "", 00:09:17.408 "trsvcid": "4421" 00:09:17.408 } 00:09:17.408 } 00:09:17.408 } 00:09:17.408 Got JSON-RPC error response 00:09:17.408 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:09:17.408 16:20:51 -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24419 -i 0 00:09:17.974 [2024-04-17 16:20:51.785628] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24419: invalid cntlid range [0-65519] 00:09:17.974 16:20:51 -- target/invalid.sh@73 -- # out='2024/04/17 16:20:51 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode24419], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:09:17.974 request: 00:09:17.974 { 00:09:17.974 "method": "nvmf_create_subsystem", 00:09:17.974 "params": { 00:09:17.974 "nqn": "nqn.2016-06.io.spdk:cnode24419", 00:09:17.974 "min_cntlid": 0 00:09:17.974 } 00:09:17.974 } 00:09:17.974 Got JSON-RPC error response 00:09:17.974 GoRPCClient: error on JSON-RPC call' 00:09:17.974 16:20:51 -- target/invalid.sh@74 -- # [[ 2024/04/17 16:20:51 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode24419], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:09:17.974 request: 00:09:17.974 { 00:09:17.974 "method": "nvmf_create_subsystem", 00:09:17.974 "params": { 00:09:17.974 "nqn": "nqn.2016-06.io.spdk:cnode24419", 00:09:17.974 "min_cntlid": 0 00:09:17.974 } 00:09:17.974 } 00:09:17.974 Got JSON-RPC error response 00:09:17.974 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:17.974 16:20:51 -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4305 -i 65520 00:09:18.233 
[2024-04-17 16:20:52.102077] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4305: invalid cntlid range [65520-65519] 00:09:18.233 16:20:52 -- target/invalid.sh@75 -- # out='2024/04/17 16:20:52 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode4305], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:09:18.233 request: 00:09:18.233 { 00:09:18.233 "method": "nvmf_create_subsystem", 00:09:18.233 "params": { 00:09:18.233 "nqn": "nqn.2016-06.io.spdk:cnode4305", 00:09:18.233 "min_cntlid": 65520 00:09:18.233 } 00:09:18.233 } 00:09:18.233 Got JSON-RPC error response 00:09:18.233 GoRPCClient: error on JSON-RPC call' 00:09:18.233 16:20:52 -- target/invalid.sh@76 -- # [[ 2024/04/17 16:20:52 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode4305], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:09:18.233 request: 00:09:18.233 { 00:09:18.233 "method": "nvmf_create_subsystem", 00:09:18.233 "params": { 00:09:18.233 "nqn": "nqn.2016-06.io.spdk:cnode4305", 00:09:18.233 "min_cntlid": 65520 00:09:18.233 } 00:09:18.233 } 00:09:18.233 Got JSON-RPC error response 00:09:18.233 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:18.233 16:20:52 -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31377 -I 0 00:09:18.492 [2024-04-17 16:20:52.386490] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31377: invalid cntlid range [1-0] 00:09:18.492 16:20:52 -- target/invalid.sh@77 -- # out='2024/04/17 16:20:52 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode31377], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:09:18.492 request: 00:09:18.492 { 00:09:18.492 "method": "nvmf_create_subsystem", 00:09:18.492 "params": { 00:09:18.492 "nqn": "nqn.2016-06.io.spdk:cnode31377", 00:09:18.492 "max_cntlid": 0 00:09:18.492 } 00:09:18.492 } 00:09:18.492 Got JSON-RPC error response 00:09:18.492 GoRPCClient: error on JSON-RPC call' 00:09:18.492 16:20:52 -- target/invalid.sh@78 -- # [[ 2024/04/17 16:20:52 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode31377], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:09:18.492 request: 00:09:18.492 { 00:09:18.492 "method": "nvmf_create_subsystem", 00:09:18.492 "params": { 00:09:18.492 "nqn": "nqn.2016-06.io.spdk:cnode31377", 00:09:18.492 "max_cntlid": 0 00:09:18.492 } 00:09:18.492 } 00:09:18.492 Got JSON-RPC error response 00:09:18.492 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:18.492 16:20:52 -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17079 -I 65520 00:09:18.750 [2024-04-17 16:20:52.779134] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17079: invalid cntlid range [1-65520] 00:09:19.008 16:20:52 -- target/invalid.sh@79 -- # out='2024/04/17 16:20:52 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode17079], err: 
error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:09:19.008 request: 00:09:19.008 { 00:09:19.008 "method": "nvmf_create_subsystem", 00:09:19.008 "params": { 00:09:19.008 "nqn": "nqn.2016-06.io.spdk:cnode17079", 00:09:19.008 "max_cntlid": 65520 00:09:19.008 } 00:09:19.008 } 00:09:19.008 Got JSON-RPC error response 00:09:19.008 GoRPCClient: error on JSON-RPC call' 00:09:19.008 16:20:52 -- target/invalid.sh@80 -- # [[ 2024/04/17 16:20:52 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode17079], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:09:19.008 request: 00:09:19.008 { 00:09:19.008 "method": "nvmf_create_subsystem", 00:09:19.008 "params": { 00:09:19.008 "nqn": "nqn.2016-06.io.spdk:cnode17079", 00:09:19.008 "max_cntlid": 65520 00:09:19.008 } 00:09:19.008 } 00:09:19.008 Got JSON-RPC error response 00:09:19.008 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:19.008 16:20:52 -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16468 -i 6 -I 5 00:09:19.008 [2024-04-17 16:20:53.027455] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16468: invalid cntlid range [6-5] 00:09:19.008 16:20:53 -- target/invalid.sh@83 -- # out='2024/04/17 16:20:53 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode16468], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:09:19.008 request: 00:09:19.008 { 00:09:19.008 "method": "nvmf_create_subsystem", 00:09:19.008 "params": { 00:09:19.008 "nqn": "nqn.2016-06.io.spdk:cnode16468", 00:09:19.008 "min_cntlid": 6, 00:09:19.008 "max_cntlid": 5 00:09:19.008 } 00:09:19.008 } 00:09:19.008 Got JSON-RPC error response 00:09:19.008 GoRPCClient: error on JSON-RPC call' 00:09:19.008 16:20:53 -- target/invalid.sh@84 -- # [[ 2024/04/17 16:20:53 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode16468], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:09:19.008 request: 00:09:19.008 { 00:09:19.008 "method": "nvmf_create_subsystem", 00:09:19.008 "params": { 00:09:19.008 "nqn": "nqn.2016-06.io.spdk:cnode16468", 00:09:19.008 "min_cntlid": 6, 00:09:19.008 "max_cntlid": 5 00:09:19.008 } 00:09:19.008 } 00:09:19.008 Got JSON-RPC error response 00:09:19.008 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:19.008 16:20:53 -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:09:19.267 16:20:53 -- target/invalid.sh@87 -- # out='request: 00:09:19.267 { 00:09:19.267 "name": "foobar", 00:09:19.267 "method": "nvmf_delete_target", 00:09:19.267 "req_id": 1 00:09:19.267 } 00:09:19.267 Got JSON-RPC error response 00:09:19.267 response: 00:09:19.267 { 00:09:19.267 "code": -32602, 00:09:19.267 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:09:19.267 }' 00:09:19.267 16:20:53 -- target/invalid.sh@88 -- # [[ request: 00:09:19.267 { 00:09:19.267 "name": "foobar", 00:09:19.267 "method": "nvmf_delete_target", 00:09:19.267 "req_id": 1 00:09:19.267 } 00:09:19.267 Got JSON-RPC error response 00:09:19.267 response: 00:09:19.267 { 00:09:19.267 "code": -32602, 00:09:19.267 "message": "The specified target doesn't exist, cannot delete it." 00:09:19.267 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:09:19.267 16:20:53 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:09:19.267 16:20:53 -- target/invalid.sh@91 -- # nvmftestfini 00:09:19.267 16:20:53 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:19.267 16:20:53 -- nvmf/common.sh@117 -- # sync 00:09:19.267 16:20:53 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:19.267 16:20:53 -- nvmf/common.sh@120 -- # set +e 00:09:19.267 16:20:53 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:19.267 16:20:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:19.267 rmmod nvme_tcp 00:09:19.267 rmmod nvme_fabrics 00:09:19.267 rmmod nvme_keyring 00:09:19.267 16:20:53 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:19.267 16:20:53 -- nvmf/common.sh@124 -- # set -e 00:09:19.267 16:20:53 -- nvmf/common.sh@125 -- # return 0 00:09:19.267 16:20:53 -- nvmf/common.sh@478 -- # '[' -n 67733 ']' 00:09:19.267 16:20:53 -- nvmf/common.sh@479 -- # killprocess 67733 00:09:19.267 16:20:53 -- common/autotest_common.sh@936 -- # '[' -z 67733 ']' 00:09:19.267 16:20:53 -- common/autotest_common.sh@940 -- # kill -0 67733 00:09:19.267 16:20:53 -- common/autotest_common.sh@941 -- # uname 00:09:19.267 16:20:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:19.267 16:20:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67733 00:09:19.267 16:20:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:19.267 killing process with pid 67733 00:09:19.267 16:20:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:19.267 16:20:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67733' 00:09:19.267 16:20:53 -- common/autotest_common.sh@955 -- # kill 67733 00:09:19.267 16:20:53 -- common/autotest_common.sh@960 -- # wait 67733 00:09:19.832 16:20:53 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:19.832 16:20:53 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:19.832 16:20:53 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:19.832 16:20:53 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:19.832 16:20:53 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:19.832 16:20:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.832 16:20:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:19.832 16:20:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.832 16:20:53 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:19.832 ************************************ 00:09:19.832 END TEST nvmf_invalid 00:09:19.832 ************************************ 00:09:19.832 00:09:19.832 real 0m7.388s 00:09:19.832 user 0m30.242s 00:09:19.832 sys 0m1.491s 00:09:19.832 16:20:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:19.832 16:20:53 -- common/autotest_common.sh@10 -- # set +x 00:09:19.832 16:20:53 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:19.832 16:20:53 -- 
common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:19.832 16:20:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:19.832 16:20:53 -- common/autotest_common.sh@10 -- # set +x 00:09:20.091 ************************************ 00:09:20.091 START TEST nvmf_abort 00:09:20.091 ************************************ 00:09:20.091 16:20:53 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:20.091 * Looking for test storage... 00:09:20.091 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:20.091 16:20:53 -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:20.091 16:20:53 -- nvmf/common.sh@7 -- # uname -s 00:09:20.091 16:20:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:20.091 16:20:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:20.091 16:20:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:20.091 16:20:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:20.091 16:20:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:20.091 16:20:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:20.091 16:20:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:20.091 16:20:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:20.091 16:20:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:20.091 16:20:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:20.091 16:20:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d 00:09:20.091 16:20:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=35bbb10f-fc38-42ac-b909-033700c5e05d 00:09:20.091 16:20:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:20.091 16:20:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:20.091 16:20:53 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:20.091 16:20:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:20.091 16:20:53 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:20.091 16:20:53 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:20.091 16:20:53 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:20.091 16:20:53 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:20.091 16:20:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.091 16:20:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.091 16:20:53 -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.091 16:20:53 -- paths/export.sh@5 -- # export PATH 00:09:20.091 16:20:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.091 16:20:53 -- nvmf/common.sh@47 -- # : 0 00:09:20.091 16:20:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:20.091 16:20:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:20.091 16:20:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:20.091 16:20:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:20.092 16:20:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:20.092 16:20:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:20.092 16:20:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:20.092 16:20:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:20.092 16:20:53 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:20.092 16:20:53 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:20.092 16:20:53 -- target/abort.sh@14 -- # nvmftestinit 00:09:20.092 16:20:53 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:20.092 16:20:53 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:20.092 16:20:54 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:20.092 16:20:54 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:20.092 16:20:54 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:20.092 16:20:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.092 16:20:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:20.092 16:20:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.092 16:20:54 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:09:20.092 16:20:54 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:09:20.092 16:20:54 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:09:20.092 16:20:54 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:09:20.092 16:20:54 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:09:20.092 16:20:54 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:09:20.092 16:20:54 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:20.092 16:20:54 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:20.092 16:20:54 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:20.092 16:20:54 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:20.092 16:20:54 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:20.092 16:20:54 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:20.092 16:20:54 -- 
nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:20.092 16:20:54 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:20.092 16:20:54 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:20.092 16:20:54 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:20.092 16:20:54 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:20.092 16:20:54 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:20.092 16:20:54 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:20.092 16:20:54 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:20.092 Cannot find device "nvmf_tgt_br" 00:09:20.092 16:20:54 -- nvmf/common.sh@155 -- # true 00:09:20.092 16:20:54 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:20.092 Cannot find device "nvmf_tgt_br2" 00:09:20.092 16:20:54 -- nvmf/common.sh@156 -- # true 00:09:20.092 16:20:54 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:20.092 16:20:54 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:20.092 Cannot find device "nvmf_tgt_br" 00:09:20.092 16:20:54 -- nvmf/common.sh@158 -- # true 00:09:20.092 16:20:54 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:20.092 Cannot find device "nvmf_tgt_br2" 00:09:20.092 16:20:54 -- nvmf/common.sh@159 -- # true 00:09:20.092 16:20:54 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:20.092 16:20:54 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:20.350 16:20:54 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:20.350 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:20.350 16:20:54 -- nvmf/common.sh@162 -- # true 00:09:20.351 16:20:54 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:20.351 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:20.351 16:20:54 -- nvmf/common.sh@163 -- # true 00:09:20.351 16:20:54 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:20.351 16:20:54 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:20.351 16:20:54 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:20.351 16:20:54 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:20.351 16:20:54 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:20.351 16:20:54 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:20.351 16:20:54 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:20.351 16:20:54 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:20.351 16:20:54 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:20.351 16:20:54 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:20.351 16:20:54 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:20.351 16:20:54 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:20.351 16:20:54 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:20.351 16:20:54 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:20.351 16:20:54 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:20.351 16:20:54 -- nvmf/common.sh@189 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link set lo up 00:09:20.351 16:20:54 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:20.351 16:20:54 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:20.351 16:20:54 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:20.351 16:20:54 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:20.351 16:20:54 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:20.351 16:20:54 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:20.351 16:20:54 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:20.351 16:20:54 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:20.351 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:20.351 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:09:20.351 00:09:20.351 --- 10.0.0.2 ping statistics --- 00:09:20.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.351 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:09:20.351 16:20:54 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:20.351 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:20.351 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:09:20.351 00:09:20.351 --- 10.0.0.3 ping statistics --- 00:09:20.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.351 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:09:20.351 16:20:54 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:20.351 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:20.351 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:09:20.351 00:09:20.351 --- 10.0.0.1 ping statistics --- 00:09:20.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.351 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:09:20.351 16:20:54 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:20.351 16:20:54 -- nvmf/common.sh@422 -- # return 0 00:09:20.351 16:20:54 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:20.351 16:20:54 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:20.351 16:20:54 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:20.351 16:20:54 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:20.351 16:20:54 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:20.351 16:20:54 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:20.351 16:20:54 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:20.351 16:20:54 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:20.351 16:20:54 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:20.351 16:20:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:20.351 16:20:54 -- common/autotest_common.sh@10 -- # set +x 00:09:20.351 16:20:54 -- nvmf/common.sh@470 -- # nvmfpid=68259 00:09:20.351 16:20:54 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:20.351 16:20:54 -- nvmf/common.sh@471 -- # waitforlisten 68259 00:09:20.351 16:20:54 -- common/autotest_common.sh@817 -- # '[' -z 68259 ']' 00:09:20.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
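The block above is nvmf_veth_init building the test network: veth pairs bridged over nvmf_br, with the target ends moved into the nvmf_tgt_ns_spdk namespace, TCP port 4420 opened in iptables, and a ping check of each address. A minimal standalone sketch of the same topology, using the names and addresses from the log but omitting the second target interface (nvmf_tgt_if2 / 10.0.0.3), would be roughly:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk       # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if             # initiator-side address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                      # bridge ties the two root-side peers together
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator reaches the target namespace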
00:09:20.351 16:20:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.351 16:20:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:20.351 16:20:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.351 16:20:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:20.351 16:20:54 -- common/autotest_common.sh@10 -- # set +x 00:09:20.609 [2024-04-17 16:20:54.466163] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:09:20.609 [2024-04-17 16:20:54.466302] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:20.609 [2024-04-17 16:20:54.608132] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:20.868 [2024-04-17 16:20:54.768480] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:20.868 [2024-04-17 16:20:54.768578] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:20.868 [2024-04-17 16:20:54.768597] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:20.868 [2024-04-17 16:20:54.768609] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:20.868 [2024-04-17 16:20:54.768621] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:20.868 [2024-04-17 16:20:54.768756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:20.868 [2024-04-17 16:20:54.768908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.868 [2024-04-17 16:20:54.769413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:21.434 16:20:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:21.434 16:20:55 -- common/autotest_common.sh@850 -- # return 0 00:09:21.434 16:20:55 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:21.434 16:20:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:21.434 16:20:55 -- common/autotest_common.sh@10 -- # set +x 00:09:21.693 16:20:55 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:21.693 16:20:55 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:09:21.693 16:20:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:21.693 16:20:55 -- common/autotest_common.sh@10 -- # set +x 00:09:21.693 [2024-04-17 16:20:55.515502] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:21.693 16:20:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:21.693 16:20:55 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:21.693 16:20:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:21.693 16:20:55 -- common/autotest_common.sh@10 -- # set +x 00:09:21.693 Malloc0 00:09:21.693 16:20:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:21.693 16:20:55 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:21.693 16:20:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:21.693 16:20:55 -- common/autotest_common.sh@10 -- # set +x 00:09:21.693 Delay0 00:09:21.693 16:20:55 -- common/autotest_common.sh@577 
-- # [[ 0 == 0 ]] 00:09:21.693 16:20:55 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:21.693 16:20:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:21.693 16:20:55 -- common/autotest_common.sh@10 -- # set +x 00:09:21.693 16:20:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:21.693 16:20:55 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:21.693 16:20:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:21.693 16:20:55 -- common/autotest_common.sh@10 -- # set +x 00:09:21.693 16:20:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:21.693 16:20:55 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:21.693 16:20:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:21.693 16:20:55 -- common/autotest_common.sh@10 -- # set +x 00:09:21.693 [2024-04-17 16:20:55.596444] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:21.693 16:20:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:21.693 16:20:55 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:21.693 16:20:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:21.693 16:20:55 -- common/autotest_common.sh@10 -- # set +x 00:09:21.693 16:20:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:21.693 16:20:55 -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:21.951 [2024-04-17 16:20:55.784532] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:23.851 Initializing NVMe Controllers 00:09:23.851 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:23.851 controller IO queue size 128 less than required 00:09:23.851 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:23.851 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:23.851 Initialization complete. Launching workers. 
00:09:23.851 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 26024 00:09:23.851 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 26085, failed to submit 62 00:09:23.851 success 26028, unsuccess 57, failed 0 00:09:23.851 16:20:57 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:23.851 16:20:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:23.851 16:20:57 -- common/autotest_common.sh@10 -- # set +x 00:09:23.851 16:20:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:23.851 16:20:57 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:23.851 16:20:57 -- target/abort.sh@38 -- # nvmftestfini 00:09:23.851 16:20:57 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:23.851 16:20:57 -- nvmf/common.sh@117 -- # sync 00:09:23.851 16:20:57 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:23.851 16:20:57 -- nvmf/common.sh@120 -- # set +e 00:09:23.851 16:20:57 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:23.851 16:20:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:23.851 rmmod nvme_tcp 00:09:23.851 rmmod nvme_fabrics 00:09:24.110 rmmod nvme_keyring 00:09:24.110 16:20:57 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:24.110 16:20:57 -- nvmf/common.sh@124 -- # set -e 00:09:24.110 16:20:57 -- nvmf/common.sh@125 -- # return 0 00:09:24.110 16:20:57 -- nvmf/common.sh@478 -- # '[' -n 68259 ']' 00:09:24.110 16:20:57 -- nvmf/common.sh@479 -- # killprocess 68259 00:09:24.110 16:20:57 -- common/autotest_common.sh@936 -- # '[' -z 68259 ']' 00:09:24.110 16:20:57 -- common/autotest_common.sh@940 -- # kill -0 68259 00:09:24.110 16:20:57 -- common/autotest_common.sh@941 -- # uname 00:09:24.110 16:20:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:24.110 16:20:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68259 00:09:24.110 16:20:57 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:24.110 16:20:57 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:24.110 killing process with pid 68259 00:09:24.110 16:20:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68259' 00:09:24.110 16:20:57 -- common/autotest_common.sh@955 -- # kill 68259 00:09:24.110 16:20:57 -- common/autotest_common.sh@960 -- # wait 68259 00:09:24.368 16:20:58 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:24.368 16:20:58 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:24.368 16:20:58 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:24.368 16:20:58 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:24.368 16:20:58 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:24.368 16:20:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.368 16:20:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:24.368 16:20:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.368 16:20:58 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:24.368 00:09:24.368 real 0m4.410s 00:09:24.368 user 0m12.249s 00:09:24.368 sys 0m1.124s 00:09:24.368 16:20:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:24.368 ************************************ 00:09:24.368 END TEST nvmf_abort 00:09:24.368 ************************************ 00:09:24.368 16:20:58 -- common/autotest_common.sh@10 -- # set +x 00:09:24.368 16:20:58 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:24.368 16:20:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:24.368 16:20:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:24.368 16:20:58 -- common/autotest_common.sh@10 -- # set +x 00:09:24.626 ************************************ 00:09:24.626 START TEST nvmf_ns_hotplug_stress 00:09:24.626 ************************************ 00:09:24.626 16:20:58 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:24.626 * Looking for test storage... 00:09:24.626 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:24.626 16:20:58 -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:24.626 16:20:58 -- nvmf/common.sh@7 -- # uname -s 00:09:24.626 16:20:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:24.626 16:20:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:24.626 16:20:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:24.626 16:20:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:24.626 16:20:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:24.626 16:20:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:24.626 16:20:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:24.626 16:20:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:24.626 16:20:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:24.626 16:20:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:24.626 16:20:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d 00:09:24.626 16:20:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=35bbb10f-fc38-42ac-b909-033700c5e05d 00:09:24.626 16:20:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:24.626 16:20:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:24.626 16:20:58 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:24.626 16:20:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:24.626 16:20:58 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:24.626 16:20:58 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:24.626 16:20:58 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:24.626 16:20:58 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:24.626 16:20:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.627 16:20:58 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.627 16:20:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.627 16:20:58 -- paths/export.sh@5 -- # export PATH 00:09:24.627 16:20:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.627 16:20:58 -- nvmf/common.sh@47 -- # : 0 00:09:24.627 16:20:58 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:24.627 16:20:58 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:24.627 16:20:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:24.627 16:20:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:24.627 16:20:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:24.627 16:20:58 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:24.627 16:20:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:24.627 16:20:58 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:24.627 16:20:58 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:24.627 16:20:58 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit 00:09:24.627 16:20:58 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:24.627 16:20:58 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:24.627 16:20:58 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:24.627 16:20:58 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:24.627 16:20:58 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:24.627 16:20:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.627 16:20:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:24.627 16:20:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.627 16:20:58 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:09:24.627 16:20:58 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:09:24.627 16:20:58 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:09:24.627 16:20:58 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:09:24.627 16:20:58 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:09:24.627 16:20:58 -- nvmf/common.sh@421 
-- # nvmf_veth_init 00:09:24.627 16:20:58 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:24.627 16:20:58 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:24.627 16:20:58 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:24.627 16:20:58 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:24.627 16:20:58 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:24.627 16:20:58 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:24.627 16:20:58 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:24.627 16:20:58 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:24.627 16:20:58 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:24.627 16:20:58 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:24.627 16:20:58 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:24.627 16:20:58 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:24.627 16:20:58 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:24.627 16:20:58 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:24.627 Cannot find device "nvmf_tgt_br" 00:09:24.627 16:20:58 -- nvmf/common.sh@155 -- # true 00:09:24.627 16:20:58 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:24.627 Cannot find device "nvmf_tgt_br2" 00:09:24.627 16:20:58 -- nvmf/common.sh@156 -- # true 00:09:24.627 16:20:58 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:24.627 16:20:58 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:24.627 Cannot find device "nvmf_tgt_br" 00:09:24.627 16:20:58 -- nvmf/common.sh@158 -- # true 00:09:24.627 16:20:58 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:24.627 Cannot find device "nvmf_tgt_br2" 00:09:24.627 16:20:58 -- nvmf/common.sh@159 -- # true 00:09:24.627 16:20:58 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:24.627 16:20:58 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:24.627 16:20:58 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:24.627 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:24.627 16:20:58 -- nvmf/common.sh@162 -- # true 00:09:24.627 16:20:58 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:24.627 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:24.627 16:20:58 -- nvmf/common.sh@163 -- # true 00:09:24.627 16:20:58 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:24.627 16:20:58 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:24.885 16:20:58 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:24.885 16:20:58 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:24.885 16:20:58 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:24.885 16:20:58 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:24.885 16:20:58 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:24.885 16:20:58 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:24.885 16:20:58 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:24.885 16:20:58 -- nvmf/common.sh@183 -- # ip link set 
nvmf_init_if up 00:09:24.885 16:20:58 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:24.885 16:20:58 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:24.885 16:20:58 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:24.885 16:20:58 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:24.885 16:20:58 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:24.885 16:20:58 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:24.885 16:20:58 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:24.885 16:20:58 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:24.885 16:20:58 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:24.885 16:20:58 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:24.885 16:20:58 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:24.885 16:20:58 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:24.885 16:20:58 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:24.885 16:20:58 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:24.885 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:24.885 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:09:24.885 00:09:24.885 --- 10.0.0.2 ping statistics --- 00:09:24.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.885 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:09:24.885 16:20:58 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:24.885 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:24.885 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:09:24.885 00:09:24.885 --- 10.0.0.3 ping statistics --- 00:09:24.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.885 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:09:24.885 16:20:58 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:24.885 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:24.885 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:09:24.885 00:09:24.885 --- 10.0.0.1 ping statistics --- 00:09:24.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.885 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:09:24.885 16:20:58 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:24.885 16:20:58 -- nvmf/common.sh@422 -- # return 0 00:09:24.885 16:20:58 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:24.885 16:20:58 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:24.885 16:20:58 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:24.885 16:20:58 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:24.885 16:20:58 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:24.885 16:20:58 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:24.885 16:20:58 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:24.885 16:20:58 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:09:24.885 16:20:58 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:24.885 16:20:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:24.885 16:20:58 -- common/autotest_common.sh@10 -- # set +x 00:09:24.885 16:20:58 -- nvmf/common.sh@470 -- # nvmfpid=68535 00:09:24.885 16:20:58 -- nvmf/common.sh@471 -- # waitforlisten 68535 00:09:24.885 16:20:58 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:24.885 16:20:58 -- common/autotest_common.sh@817 -- # '[' -z 68535 ']' 00:09:24.885 16:20:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.885 16:20:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:24.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.885 16:20:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.886 16:20:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:24.886 16:20:58 -- common/autotest_common.sh@10 -- # set +x 00:09:25.144 [2024-04-17 16:20:58.936543] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:09:25.144 [2024-04-17 16:20:58.936638] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:25.144 [2024-04-17 16:20:59.074433] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:25.402 [2024-04-17 16:20:59.208727] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:25.402 [2024-04-17 16:20:59.208801] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:25.402 [2024-04-17 16:20:59.208815] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:25.402 [2024-04-17 16:20:59.208824] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:25.402 [2024-04-17 16:20:59.208831] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:25.402 [2024-04-17 16:20:59.209009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:25.402 [2024-04-17 16:20:59.209075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:25.402 [2024-04-17 16:20:59.209087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:25.968 16:20:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:25.968 16:20:59 -- common/autotest_common.sh@850 -- # return 0 00:09:25.968 16:20:59 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:25.968 16:20:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:25.968 16:20:59 -- common/autotest_common.sh@10 -- # set +x 00:09:25.968 16:20:59 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:25.968 16:20:59 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:09:25.968 16:20:59 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:26.231 [2024-04-17 16:21:00.263759] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:26.490 16:21:00 -- target/ns_hotplug_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:26.748 16:21:00 -- target/ns_hotplug_stress.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:27.007 [2024-04-17 16:21:00.874504] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:27.007 16:21:00 -- target/ns_hotplug_stress.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:27.264 16:21:01 -- target/ns_hotplug_stress.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:09:27.523 Malloc0 00:09:27.523 16:21:01 -- target/ns_hotplug_stress.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:27.781 Delay0 00:09:27.781 16:21:01 -- target/ns_hotplug_stress.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:28.347 16:21:02 -- target/ns_hotplug_stress.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:09:28.605 NULL1 00:09:28.605 16:21:02 -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:28.605 16:21:02 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=68675 00:09:28.605 16:21:02 -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:09:28.605 16:21:02 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68675 00:09:28.605 16:21:02 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.032 Read completed with error (sct=0, sc=11) 00:09:30.032 16:21:03 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:30.032 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.032 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:09:30.032 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.032 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.032 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.290 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.290 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.290 16:21:04 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:09:30.290 16:21:04 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:09:30.548 true 00:09:30.548 16:21:04 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68675 00:09:30.548 16:21:04 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:31.482 16:21:05 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:31.741 16:21:05 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:09:31.741 16:21:05 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:09:31.999 true 00:09:31.999 16:21:05 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68675 00:09:31.999 16:21:05 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:32.256 16:21:06 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:32.514 16:21:06 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:09:32.514 16:21:06 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:09:32.773 true 00:09:32.774 16:21:06 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68675 00:09:32.774 16:21:06 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:33.032 16:21:06 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:33.290 16:21:07 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:09:33.290 16:21:07 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:09:33.548 true 00:09:33.548 16:21:07 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68675 00:09:33.548 16:21:07 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:33.806 16:21:07 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:34.063 16:21:08 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:09:34.063 16:21:08 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:09:34.320 true 00:09:34.320 16:21:08 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68675 00:09:34.320 16:21:08 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:35.254 16:21:09 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
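Each pass of the stress loop above repeats the same pattern: re-add Delay0 as namespace 1 of cnode1, bump null_size, grow NULL1 (namespace 2) with bdev_null_resize, confirm the perf process (pid 68675) is still alive with kill -0, then hot-remove namespace 1 again while I/O continues. Reconstructed from that repeating pattern rather than from the verbatim ns_hotplug_stress.sh, the loop amounts to roughly:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
null_size=1000
while kill -0 "$PERF_PID" 2>/dev/null; do   # run until the 30 s spdk_nvme_perf job exits
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # hot-add namespace 1 back
    null_size=$((null_size + 1))
    $rpc_py bdev_null_resize NULL1 "$null_size"                       # resize namespace 2 in place
    $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # hot-remove under load
done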
00:09:35.511 16:21:09 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:09:35.511 16:21:09 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:09:35.809 true 00:09:36.066 16:21:09 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68675 00:09:36.066 16:21:09 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:36.323 16:21:10 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:36.580 16:21:10 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:09:36.580 16:21:10 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:09:36.838 true 00:09:36.838 16:21:10 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68675 00:09:36.838 16:21:10 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.095 16:21:11 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:37.354 16:21:11 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:09:37.354 16:21:11 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:09:37.612 true 00:09:37.612 16:21:11 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68675 00:09:37.612 16:21:11 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.869 16:21:11 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:38.127 16:21:12 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:09:38.127 16:21:12 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:09:38.693 true 00:09:38.693 16:21:12 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68675 00:09:38.693 16:21:12 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:39.258 16:21:13 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:39.826 16:21:13 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:09:39.826 16:21:13 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:09:40.084 true 00:09:40.084 16:21:14 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68675 00:09:40.084 16:21:14 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:41.456 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:41.456 16:21:15 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:41.456 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:41.456 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:41.456 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:41.456 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:41.456 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:41.456 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:41.714 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:41.714 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:41.714 16:21:15 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:09:41.714 16:21:15 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:09:41.972 true 00:09:41.972 16:21:15 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68675 00:09:41.972 16:21:15 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:42.906 16:21:16 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:42.906 16:21:16 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:09:42.906 16:21:16 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:09:43.164 true 00:09:43.164 16:21:17 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68675 00:09:43.164 16:21:17 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:43.422 16:21:17 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:43.680 16:21:17 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:09:43.680 16:21:17 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:09:43.938 true 00:09:43.938 16:21:17 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68675 00:09:43.938 16:21:17 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:44.871 16:21:18 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:45.129 16:21:19 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:09:45.129 16:21:19 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:09:45.387 true 00:09:45.387 16:21:19 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68675 00:09:45.387 16:21:19 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:46.762 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:46.762 16:21:20 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:46.762 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:46.762 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:46.762 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:47.021 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:47.021 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:47.021 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:47.021 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:47.021 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:09:47.021 16:21:21 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 00:09:47.021 16:21:21 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:09:47.279 true 00:09:47.279 16:21:21 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68675 00:09:47.279 16:21:21 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:48.214 16:21:22 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:48.472 16:21:22 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:09:48.472 16:21:22 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:09:48.730 true 00:09:48.730 16:21:22 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68675 00:09:48.730 16:21:22 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:48.989 16:21:22 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:49.246 16:21:23 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:09:49.246 16:21:23 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:09:49.504 true 00:09:49.504 16:21:23 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68675 00:09:49.504 16:21:23 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:50.070 16:21:24 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:50.637 16:21:24 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:09:50.637 16:21:24 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:09:50.895 true 00:09:50.895 16:21:24 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68675 00:09:50.895 16:21:24 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:52.269 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.269 16:21:26 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:52.269 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.269 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.269 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.269 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.526 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.526 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.526 16:21:26 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:09:52.526 16:21:26 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:09:52.783 true 00:09:53.042 16:21:26 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68675 00:09:53.042 16:21:26 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
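The "Read completed with error (sct=0, sc=11)" floods are the intended failure mode here: status code type 0 with status 11 (0x0b) is the NVMe generic status Invalid Namespace or Format, which is what reads against namespace 1 return while it is hot-removed. The I/O producing them comes from the spdk_nvme_perf invocation earlier in the log, where -Q 1000 keeps perf running through the errors and, as the counter in each line suggests, reports only every 1000th occurrence, hence "Message suppressed 999 times":

/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
    -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000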
00:09:53.608 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:53.608 16:21:27 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:53.608 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:53.608 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:53.608 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:53.608 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:53.865 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:53.865 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:53.865 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:53.865 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:53.865 16:21:27 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:09:53.865 16:21:27 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:09:54.123 true 00:09:54.381 16:21:28 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68675 00:09:54.381 16:21:28 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:54.947 16:21:28 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:54.947 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:54.947 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:55.279 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:55.279 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:55.279 16:21:29 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:09:55.279 16:21:29 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:09:55.537 true 00:09:55.537 16:21:29 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68675 00:09:55.537 16:21:29 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:55.796 16:21:29 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:56.055 16:21:30 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:09:56.055 16:21:30 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:09:56.313 true 00:09:56.313 16:21:30 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68675 00:09:56.313 16:21:30 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:57.247 16:21:31 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:57.505 16:21:31 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:09:57.505 16:21:31 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:09:57.763 true 00:09:57.763 16:21:31 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68675 00:09:57.763 16:21:31 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:09:58.021 16:21:31 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:58.280 16:21:32 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:09:58.280 16:21:32 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:09:58.280 true 00:09:58.538 16:21:32 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68675 00:09:58.538 16:21:32 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:58.538 16:21:32 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:58.797 16:21:32 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:09:58.797 16:21:32 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:09:59.056 Initializing NVMe Controllers 00:09:59.056 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:59.056 Controller IO queue size 128, less than required. 00:09:59.056 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:59.056 Controller IO queue size 128, less than required. 00:09:59.056 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:59.056 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:59.056 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:09:59.056 Initialization complete. Launching workers. 
00:09:59.056 ======================================================== 00:09:59.056 Latency(us) 00:09:59.056 Device Information : IOPS MiB/s Average min max 00:09:59.056 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1573.64 0.77 45522.44 2959.45 1172931.96 00:09:59.056 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10388.54 5.07 12321.23 3503.84 707617.84 00:09:59.056 ======================================================== 00:09:59.056 Total : 11962.18 5.84 16688.89 2959.45 1172931.96 00:09:59.056 00:09:59.324 true 00:09:59.324 16:21:33 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68675 00:09:59.324 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (68675) - No such process 00:09:59.324 16:21:33 -- target/ns_hotplug_stress.sh@44 -- # wait 68675 00:09:59.324 16:21:33 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:09:59.324 16:21:33 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini 00:09:59.324 16:21:33 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:59.324 16:21:33 -- nvmf/common.sh@117 -- # sync 00:09:59.324 16:21:33 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:59.324 16:21:33 -- nvmf/common.sh@120 -- # set +e 00:09:59.324 16:21:33 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:59.324 16:21:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:59.324 rmmod nvme_tcp 00:09:59.324 rmmod nvme_fabrics 00:09:59.324 rmmod nvme_keyring 00:09:59.324 16:21:33 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:59.324 16:21:33 -- nvmf/common.sh@124 -- # set -e 00:09:59.324 16:21:33 -- nvmf/common.sh@125 -- # return 0 00:09:59.324 16:21:33 -- nvmf/common.sh@478 -- # '[' -n 68535 ']' 00:09:59.324 16:21:33 -- nvmf/common.sh@479 -- # killprocess 68535 00:09:59.324 16:21:33 -- common/autotest_common.sh@936 -- # '[' -z 68535 ']' 00:09:59.324 16:21:33 -- common/autotest_common.sh@940 -- # kill -0 68535 00:09:59.324 16:21:33 -- common/autotest_common.sh@941 -- # uname 00:09:59.324 16:21:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:59.324 16:21:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68535 00:09:59.324 killing process with pid 68535 00:09:59.324 16:21:33 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:59.324 16:21:33 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:59.324 16:21:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68535' 00:09:59.324 16:21:33 -- common/autotest_common.sh@955 -- # kill 68535 00:09:59.324 16:21:33 -- common/autotest_common.sh@960 -- # wait 68535 00:09:59.613 16:21:33 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:59.613 16:21:33 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:59.613 16:21:33 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:59.613 16:21:33 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:59.613 16:21:33 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:59.613 16:21:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.613 16:21:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:59.613 16:21:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.613 16:21:33 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:59.613 00:09:59.613 real 0m35.116s 00:09:59.613 user 2m29.218s 00:09:59.613 sys 0m8.412s 00:09:59.613 16:21:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:59.613 16:21:33 -- 
00:09:59.613 16:21:33 -- common/autotest_common.sh@10 -- # set +x
00:09:59.613 ************************************
00:09:59.613 END TEST nvmf_ns_hotplug_stress
00:09:59.613 ************************************
00:09:59.613 16:21:33 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:09:59.613 16:21:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:09:59.613 16:21:33 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:59.613 16:21:33 -- common/autotest_common.sh@10 -- # set +x
00:09:59.613 ************************************
00:09:59.613 START TEST nvmf_connect_stress
00:09:59.613 ************************************
00:09:59.613 16:21:33 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:09:59.872 * Looking for test storage...
00:09:59.872 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:09:59.872 16:21:33 -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:09:59.872 16:21:33 -- nvmf/common.sh@7 -- # uname -s
00:09:59.872 16:21:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:09:59.872 16:21:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:09:59.872 16:21:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:09:59.872 16:21:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:09:59.872 16:21:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:09:59.872 16:21:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:09:59.872 16:21:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:09:59.872 16:21:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:09:59.872 16:21:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:09:59.872 16:21:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:09:59.872 16:21:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d
00:09:59.872 16:21:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=35bbb10f-fc38-42ac-b909-033700c5e05d
00:09:59.872 16:21:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:09:59.872 16:21:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:09:59.872 16:21:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:09:59.872 16:21:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:09:59.872 16:21:33 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:09:59.872 16:21:33 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:59.872 16:21:33 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:59.872 16:21:33 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:59.872 16:21:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:59.873 16:21:33 -- paths/export.sh@3 -- #
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.873 16:21:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.873 16:21:33 -- paths/export.sh@5 -- # export PATH 00:09:59.873 16:21:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.873 16:21:33 -- nvmf/common.sh@47 -- # : 0 00:09:59.873 16:21:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:59.873 16:21:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:59.873 16:21:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:59.873 16:21:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:59.873 16:21:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:59.873 16:21:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:59.873 16:21:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:59.873 16:21:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:59.873 16:21:33 -- target/connect_stress.sh@12 -- # nvmftestinit 00:09:59.873 16:21:33 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:59.873 16:21:33 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:59.873 16:21:33 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:59.873 16:21:33 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:59.873 16:21:33 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:59.873 16:21:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.873 16:21:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:59.873 16:21:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.873 16:21:33 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:09:59.873 16:21:33 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:09:59.873 16:21:33 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:09:59.873 16:21:33 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:09:59.873 16:21:33 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:09:59.873 16:21:33 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:09:59.873 16:21:33 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:59.873 
16:21:33 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:59.873 16:21:33 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:59.873 16:21:33 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:59.873 16:21:33 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:59.873 16:21:33 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:59.873 16:21:33 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:59.873 16:21:33 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:59.873 16:21:33 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:59.873 16:21:33 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:59.873 16:21:33 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:59.873 16:21:33 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:59.873 16:21:33 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:59.873 16:21:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:59.873 Cannot find device "nvmf_tgt_br" 00:09:59.873 16:21:33 -- nvmf/common.sh@155 -- # true 00:09:59.873 16:21:33 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:59.873 Cannot find device "nvmf_tgt_br2" 00:09:59.873 16:21:33 -- nvmf/common.sh@156 -- # true 00:09:59.873 16:21:33 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:59.873 16:21:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:59.873 Cannot find device "nvmf_tgt_br" 00:09:59.873 16:21:33 -- nvmf/common.sh@158 -- # true 00:09:59.873 16:21:33 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:59.873 Cannot find device "nvmf_tgt_br2" 00:09:59.873 16:21:33 -- nvmf/common.sh@159 -- # true 00:09:59.873 16:21:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:59.873 16:21:33 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:59.873 16:21:33 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:59.873 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:59.873 16:21:33 -- nvmf/common.sh@162 -- # true 00:09:59.873 16:21:33 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:59.873 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:59.873 16:21:33 -- nvmf/common.sh@163 -- # true 00:09:59.873 16:21:33 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:59.873 16:21:33 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:59.873 16:21:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:59.873 16:21:33 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:00.131 16:21:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:00.131 16:21:33 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:00.131 16:21:33 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:00.131 16:21:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:00.132 16:21:33 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:00.132 16:21:34 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:00.132 16:21:34 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:00.132 
16:21:34 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:00.132 16:21:34 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:00.132 16:21:34 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:00.132 16:21:34 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:00.132 16:21:34 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:00.132 16:21:34 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:00.132 16:21:34 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:00.132 16:21:34 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:00.132 16:21:34 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:00.132 16:21:34 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:00.132 16:21:34 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:00.132 16:21:34 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:00.132 16:21:34 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:00.132 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:00.132 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:10:00.132 00:10:00.132 --- 10.0.0.2 ping statistics --- 00:10:00.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.132 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:10:00.132 16:21:34 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:00.132 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:00.132 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:10:00.132 00:10:00.132 --- 10.0.0.3 ping statistics --- 00:10:00.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.132 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:10:00.132 16:21:34 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:00.132 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:00.132 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:10:00.132 00:10:00.132 --- 10.0.0.1 ping statistics --- 00:10:00.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.132 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:10:00.132 16:21:34 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:00.132 16:21:34 -- nvmf/common.sh@422 -- # return 0 00:10:00.132 16:21:34 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:00.132 16:21:34 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:00.132 16:21:34 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:00.132 16:21:34 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:00.132 16:21:34 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:00.132 16:21:34 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:00.132 16:21:34 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:00.132 16:21:34 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:10:00.132 16:21:34 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:00.132 16:21:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:00.132 16:21:34 -- common/autotest_common.sh@10 -- # set +x 00:10:00.132 16:21:34 -- nvmf/common.sh@470 -- # nvmfpid=69714 00:10:00.132 16:21:34 -- nvmf/common.sh@471 -- # waitforlisten 69714 00:10:00.132 16:21:34 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:00.132 16:21:34 -- common/autotest_common.sh@817 -- # '[' -z 69714 ']' 00:10:00.132 16:21:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.132 16:21:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:00.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.132 16:21:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.132 16:21:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:00.132 16:21:34 -- common/autotest_common.sh@10 -- # set +x 00:10:00.391 [2024-04-17 16:21:34.217030] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:10:00.392 [2024-04-17 16:21:34.217167] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:00.392 [2024-04-17 16:21:34.362588] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:00.651 [2024-04-17 16:21:34.484446] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:00.651 [2024-04-17 16:21:34.484513] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:00.651 [2024-04-17 16:21:34.484524] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:00.651 [2024-04-17 16:21:34.484532] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:00.651 [2024-04-17 16:21:34.484539] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
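Stepping back, the nvmf_veth_init sequence above wires up a small virtual topology: an initiator-side veth pair, a target-side veth pair whose far end is moved into the nvmf_tgt_ns_spdk namespace, and a bridge joining the host ends. A minimal standalone sketch of the same wiring, using only commands that appear in the trace (the second target interface and error handling are omitted for brevity):

  # Minimal sketch of the veth/bridge/netns wiring performed by nvmf_veth_init.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The three pings above are just smoke tests that traffic crosses this bridge in both directions before the target application is launched.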
00:10:00.651 [2024-04-17 16:21:34.484726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:00.651 [2024-04-17 16:21:34.484861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:00.651 [2024-04-17 16:21:34.484864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:01.216 16:21:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:01.216 16:21:35 -- common/autotest_common.sh@850 -- # return 0 00:10:01.216 16:21:35 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:01.216 16:21:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:01.216 16:21:35 -- common/autotest_common.sh@10 -- # set +x 00:10:01.475 16:21:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:01.475 16:21:35 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:01.475 16:21:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:01.475 16:21:35 -- common/autotest_common.sh@10 -- # set +x 00:10:01.475 [2024-04-17 16:21:35.268060] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:01.475 16:21:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:01.475 16:21:35 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:01.475 16:21:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:01.475 16:21:35 -- common/autotest_common.sh@10 -- # set +x 00:10:01.475 16:21:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:01.475 16:21:35 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:01.475 16:21:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:01.475 16:21:35 -- common/autotest_common.sh@10 -- # set +x 00:10:01.475 [2024-04-17 16:21:35.286058] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:01.475 16:21:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:01.475 16:21:35 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:01.475 16:21:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:01.475 16:21:35 -- common/autotest_common.sh@10 -- # set +x 00:10:01.475 NULL1 00:10:01.475 16:21:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:01.475 16:21:35 -- target/connect_stress.sh@21 -- # PERF_PID=69772 00:10:01.475 16:21:35 -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:10:01.475 16:21:35 -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:10:01.475 16:21:35 -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:10:01.475 16:21:35 -- target/connect_stress.sh@27 -- # seq 1 20 00:10:01.475 16:21:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:01.475 16:21:35 -- target/connect_stress.sh@28 -- # cat 00:10:01.475 16:21:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:01.475 16:21:35 -- target/connect_stress.sh@28 -- # cat 00:10:01.475 16:21:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:01.475 16:21:35 -- target/connect_stress.sh@28 -- # cat 00:10:01.475 16:21:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:01.475 16:21:35 -- 
target/connect_stress.sh@28 -- # cat 00:10:01.475 16:21:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:01.475 16:21:35 -- target/connect_stress.sh@28 -- # cat 00:10:01.475 16:21:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:01.475 16:21:35 -- target/connect_stress.sh@28 -- # cat 00:10:01.475 16:21:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:01.475 16:21:35 -- target/connect_stress.sh@28 -- # cat 00:10:01.475 16:21:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:01.475 16:21:35 -- target/connect_stress.sh@28 -- # cat 00:10:01.475 16:21:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:01.475 16:21:35 -- target/connect_stress.sh@28 -- # cat 00:10:01.475 16:21:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:01.475 16:21:35 -- target/connect_stress.sh@28 -- # cat 00:10:01.475 16:21:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:01.475 16:21:35 -- target/connect_stress.sh@28 -- # cat 00:10:01.475 16:21:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:01.475 16:21:35 -- target/connect_stress.sh@28 -- # cat 00:10:01.475 16:21:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:01.475 16:21:35 -- target/connect_stress.sh@28 -- # cat 00:10:01.475 16:21:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:01.475 16:21:35 -- target/connect_stress.sh@28 -- # cat 00:10:01.475 16:21:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:01.475 16:21:35 -- target/connect_stress.sh@28 -- # cat 00:10:01.475 16:21:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:01.475 16:21:35 -- target/connect_stress.sh@28 -- # cat 00:10:01.475 16:21:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:01.475 16:21:35 -- target/connect_stress.sh@28 -- # cat 00:10:01.475 16:21:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:01.475 16:21:35 -- target/connect_stress.sh@28 -- # cat 00:10:01.476 16:21:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:01.476 16:21:35 -- target/connect_stress.sh@28 -- # cat 00:10:01.476 16:21:35 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:01.476 16:21:35 -- target/connect_stress.sh@28 -- # cat 00:10:01.476 16:21:35 -- target/connect_stress.sh@34 -- # kill -0 69772 00:10:01.476 16:21:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:01.476 16:21:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:01.476 16:21:35 -- common/autotest_common.sh@10 -- # set +x 00:10:01.734 16:21:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:01.734 16:21:35 -- target/connect_stress.sh@34 -- # kill -0 69772 00:10:01.734 16:21:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:01.734 16:21:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:01.734 16:21:35 -- common/autotest_common.sh@10 -- # set +x 00:10:01.993 16:21:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:01.993 16:21:36 -- target/connect_stress.sh@34 -- # kill -0 69772 00:10:01.993 16:21:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:01.993 16:21:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:01.993 16:21:36 -- common/autotest_common.sh@10 -- # set +x 00:10:02.560 16:21:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:02.560 16:21:36 -- target/connect_stress.sh@34 -- # kill -0 69772 00:10:02.560 16:21:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:02.560 16:21:36 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:10:02.560 16:21:36 -- common/autotest_common.sh@10 -- # set +x 00:10:02.819 16:21:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:02.819 16:21:36 -- target/connect_stress.sh@34 -- # kill -0 69772 00:10:02.819 16:21:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:02.819 16:21:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:02.819 16:21:36 -- common/autotest_common.sh@10 -- # set +x 00:10:03.077 16:21:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:03.077 16:21:36 -- target/connect_stress.sh@34 -- # kill -0 69772 00:10:03.077 16:21:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:03.077 16:21:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:03.077 16:21:36 -- common/autotest_common.sh@10 -- # set +x 00:10:03.336 16:21:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:03.336 16:21:37 -- target/connect_stress.sh@34 -- # kill -0 69772 00:10:03.336 16:21:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:03.336 16:21:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:03.336 16:21:37 -- common/autotest_common.sh@10 -- # set +x 00:10:03.594 16:21:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:03.594 16:21:37 -- target/connect_stress.sh@34 -- # kill -0 69772 00:10:03.594 16:21:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:03.594 16:21:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:03.594 16:21:37 -- common/autotest_common.sh@10 -- # set +x 00:10:04.161 16:21:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:04.161 16:21:37 -- target/connect_stress.sh@34 -- # kill -0 69772 00:10:04.161 16:21:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:04.161 16:21:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:04.161 16:21:37 -- common/autotest_common.sh@10 -- # set +x 00:10:04.428 16:21:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:04.428 16:21:38 -- target/connect_stress.sh@34 -- # kill -0 69772 00:10:04.428 16:21:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:04.428 16:21:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:04.428 16:21:38 -- common/autotest_common.sh@10 -- # set +x 00:10:04.717 16:21:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:04.717 16:21:38 -- target/connect_stress.sh@34 -- # kill -0 69772 00:10:04.717 16:21:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:04.717 16:21:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:04.717 16:21:38 -- common/autotest_common.sh@10 -- # set +x 00:10:04.982 16:21:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:04.982 16:21:38 -- target/connect_stress.sh@34 -- # kill -0 69772 00:10:04.982 16:21:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:04.982 16:21:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:04.982 16:21:38 -- common/autotest_common.sh@10 -- # set +x 00:10:05.240 16:21:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:05.240 16:21:39 -- target/connect_stress.sh@34 -- # kill -0 69772 00:10:05.240 16:21:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:05.240 16:21:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:05.240 16:21:39 -- common/autotest_common.sh@10 -- # set +x 00:10:05.806 16:21:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:05.806 16:21:39 -- target/connect_stress.sh@34 -- # kill -0 69772 00:10:05.806 16:21:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:05.806 16:21:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:05.806 
16:21:39 -- common/autotest_common.sh@10 -- # set +x 00:10:06.064 16:21:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:06.064 16:21:39 -- target/connect_stress.sh@34 -- # kill -0 69772 00:10:06.064 16:21:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:06.064 16:21:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:06.064 16:21:39 -- common/autotest_common.sh@10 -- # set +x 00:10:06.322 16:21:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:06.322 16:21:40 -- target/connect_stress.sh@34 -- # kill -0 69772 00:10:06.322 16:21:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:06.322 16:21:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:06.322 16:21:40 -- common/autotest_common.sh@10 -- # set +x 00:10:06.580 16:21:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:06.580 16:21:40 -- target/connect_stress.sh@34 -- # kill -0 69772 00:10:06.580 16:21:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:06.580 16:21:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:06.580 16:21:40 -- common/autotest_common.sh@10 -- # set +x 00:10:07.144 16:21:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:07.144 16:21:40 -- target/connect_stress.sh@34 -- # kill -0 69772 00:10:07.144 16:21:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:07.144 16:21:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:07.144 16:21:40 -- common/autotest_common.sh@10 -- # set +x 00:10:07.402 16:21:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:07.402 16:21:41 -- target/connect_stress.sh@34 -- # kill -0 69772 00:10:07.402 16:21:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:07.402 16:21:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:07.402 16:21:41 -- common/autotest_common.sh@10 -- # set +x 00:10:07.659 16:21:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:07.659 16:21:41 -- target/connect_stress.sh@34 -- # kill -0 69772 00:10:07.659 16:21:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:07.659 16:21:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:07.659 16:21:41 -- common/autotest_common.sh@10 -- # set +x 00:10:07.917 16:21:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:07.917 16:21:41 -- target/connect_stress.sh@34 -- # kill -0 69772 00:10:07.917 16:21:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:07.917 16:21:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:07.917 16:21:41 -- common/autotest_common.sh@10 -- # set +x 00:10:08.175 16:21:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:08.175 16:21:42 -- target/connect_stress.sh@34 -- # kill -0 69772 00:10:08.175 16:21:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:08.175 16:21:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:08.175 16:21:42 -- common/autotest_common.sh@10 -- # set +x 00:10:08.740 16:21:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:08.740 16:21:42 -- target/connect_stress.sh@34 -- # kill -0 69772 00:10:08.740 16:21:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:08.740 16:21:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:08.740 16:21:42 -- common/autotest_common.sh@10 -- # set +x 00:10:08.997 16:21:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:08.997 16:21:42 -- target/connect_stress.sh@34 -- # kill -0 69772 00:10:08.997 16:21:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:08.997 16:21:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:08.997 16:21:42 -- 
common/autotest_common.sh@10 -- # set +x 00:10:09.255 16:21:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:09.255 16:21:43 -- target/connect_stress.sh@34 -- # kill -0 69772 00:10:09.255 16:21:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:09.255 16:21:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:09.255 16:21:43 -- common/autotest_common.sh@10 -- # set +x 00:10:09.512 16:21:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:09.512 16:21:43 -- target/connect_stress.sh@34 -- # kill -0 69772 00:10:09.512 16:21:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:09.512 16:21:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:09.512 16:21:43 -- common/autotest_common.sh@10 -- # set +x 00:10:09.840 16:21:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:09.840 16:21:43 -- target/connect_stress.sh@34 -- # kill -0 69772 00:10:09.840 16:21:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:09.840 16:21:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:09.840 16:21:43 -- common/autotest_common.sh@10 -- # set +x 00:10:10.114 16:21:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:10.114 16:21:44 -- target/connect_stress.sh@34 -- # kill -0 69772 00:10:10.114 16:21:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:10.114 16:21:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:10.114 16:21:44 -- common/autotest_common.sh@10 -- # set +x 00:10:10.678 16:21:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:10.678 16:21:44 -- target/connect_stress.sh@34 -- # kill -0 69772 00:10:10.678 16:21:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:10.678 16:21:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:10.678 16:21:44 -- common/autotest_common.sh@10 -- # set +x 00:10:10.935 16:21:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:10.935 16:21:44 -- target/connect_stress.sh@34 -- # kill -0 69772 00:10:10.935 16:21:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:10.935 16:21:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:10.935 16:21:44 -- common/autotest_common.sh@10 -- # set +x 00:10:11.193 16:21:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:11.193 16:21:45 -- target/connect_stress.sh@34 -- # kill -0 69772 00:10:11.193 16:21:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:11.193 16:21:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:11.193 16:21:45 -- common/autotest_common.sh@10 -- # set +x 00:10:11.451 16:21:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:11.451 16:21:45 -- target/connect_stress.sh@34 -- # kill -0 69772 00:10:11.451 16:21:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:11.451 16:21:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:11.451 16:21:45 -- common/autotest_common.sh@10 -- # set +x 00:10:11.709 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:11.709 16:21:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:11.709 16:21:45 -- target/connect_stress.sh@34 -- # kill -0 69772 00:10:11.709 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (69772) - No such process 00:10:11.709 16:21:45 -- target/connect_stress.sh@38 -- # wait 69772 00:10:11.709 16:21:45 -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:10:11.709 16:21:45 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:11.709 16:21:45 -- target/connect_stress.sh@43 -- # 
nvmftestfini 00:10:11.709 16:21:45 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:11.709 16:21:45 -- nvmf/common.sh@117 -- # sync 00:10:11.968 16:21:45 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:11.968 16:21:45 -- nvmf/common.sh@120 -- # set +e 00:10:11.968 16:21:45 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:11.968 16:21:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:11.968 rmmod nvme_tcp 00:10:11.968 rmmod nvme_fabrics 00:10:11.968 rmmod nvme_keyring 00:10:11.968 16:21:45 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:11.968 16:21:45 -- nvmf/common.sh@124 -- # set -e 00:10:11.968 16:21:45 -- nvmf/common.sh@125 -- # return 0 00:10:11.968 16:21:45 -- nvmf/common.sh@478 -- # '[' -n 69714 ']' 00:10:11.968 16:21:45 -- nvmf/common.sh@479 -- # killprocess 69714 00:10:11.968 16:21:45 -- common/autotest_common.sh@936 -- # '[' -z 69714 ']' 00:10:11.968 16:21:45 -- common/autotest_common.sh@940 -- # kill -0 69714 00:10:11.968 16:21:45 -- common/autotest_common.sh@941 -- # uname 00:10:11.968 16:21:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:11.968 16:21:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69714 00:10:11.968 killing process with pid 69714 00:10:11.968 16:21:45 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:11.968 16:21:45 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:11.968 16:21:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69714' 00:10:11.968 16:21:45 -- common/autotest_common.sh@955 -- # kill 69714 00:10:11.968 16:21:45 -- common/autotest_common.sh@960 -- # wait 69714 00:10:12.226 16:21:46 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:12.226 16:21:46 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:12.226 16:21:46 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:10:12.226 16:21:46 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:12.226 16:21:46 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:12.226 16:21:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.226 16:21:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:12.226 16:21:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.226 16:21:46 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:12.226 00:10:12.226 real 0m12.498s 00:10:12.226 user 0m41.177s 00:10:12.226 sys 0m3.425s 00:10:12.226 16:21:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:12.226 16:21:46 -- common/autotest_common.sh@10 -- # set +x 00:10:12.226 ************************************ 00:10:12.226 END TEST nvmf_connect_stress 00:10:12.226 ************************************ 00:10:12.226 16:21:46 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:12.226 16:21:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:12.226 16:21:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:12.226 16:21:46 -- common/autotest_common.sh@10 -- # set +x 00:10:12.226 ************************************ 00:10:12.226 START TEST nvmf_fused_ordering 00:10:12.226 ************************************ 00:10:12.226 16:21:46 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:12.484 * Looking for test storage... 
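The fused_ordering test starting here repeats the same scaffolding used by ns_hotplug_stress and connect_stress above: veth/netns setup, target launch, then a short rpc.py sequence before the stressor binary runs. A condensed sketch of that recurring RPC sequence, with arguments exactly as they appear in these traces:

  # Recurring target-side setup, as issued by the test scripts via rpc.py.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" nvmf_create_transport -t tcp -o -u 8192   # TCP transport (-u: IO unit size)
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  "$rpc" bdev_null_create NULL1 1000 512           # 1000 MiB null bdev, 512 B blocks

connect_stress drove that subsystem for 10 seconds (-t 10) while the script polled the tool with kill -0 and issued RPCs in a loop; fused_ordering instead adds NULL1 as a namespace and emits one fused_ordering(N) line per iteration of its submission loop, as seen below.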
00:10:12.484 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:12.484 16:21:46 -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:12.484 16:21:46 -- nvmf/common.sh@7 -- # uname -s 00:10:12.484 16:21:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:12.484 16:21:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:12.484 16:21:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:12.484 16:21:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:12.484 16:21:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:12.484 16:21:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:12.484 16:21:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:12.484 16:21:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:12.484 16:21:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:12.484 16:21:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:12.484 16:21:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d 00:10:12.484 16:21:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=35bbb10f-fc38-42ac-b909-033700c5e05d 00:10:12.484 16:21:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:12.484 16:21:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:12.484 16:21:46 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:12.484 16:21:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:12.484 16:21:46 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:12.484 16:21:46 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:12.484 16:21:46 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:12.484 16:21:46 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:12.484 16:21:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.484 16:21:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.484 16:21:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.484 16:21:46 -- paths/export.sh@5 -- # export PATH 00:10:12.484 16:21:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.484 16:21:46 -- nvmf/common.sh@47 -- # : 0 00:10:12.484 16:21:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:12.484 16:21:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:12.484 16:21:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:12.484 16:21:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:12.484 16:21:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:12.484 16:21:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:12.484 16:21:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:12.484 16:21:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:12.484 16:21:46 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:10:12.484 16:21:46 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:12.484 16:21:46 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:12.484 16:21:46 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:12.484 16:21:46 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:12.484 16:21:46 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:12.484 16:21:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.484 16:21:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:12.484 16:21:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.484 16:21:46 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:10:12.484 16:21:46 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:10:12.484 16:21:46 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:10:12.484 16:21:46 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:10:12.484 16:21:46 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:10:12.484 16:21:46 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:10:12.484 16:21:46 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:12.484 16:21:46 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:12.484 16:21:46 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:12.484 16:21:46 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:12.484 16:21:46 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:12.484 16:21:46 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:12.484 16:21:46 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:12.484 16:21:46 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:10:12.484 16:21:46 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:12.484 16:21:46 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:12.484 16:21:46 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:12.484 16:21:46 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:12.484 16:21:46 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:12.484 16:21:46 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:12.484 Cannot find device "nvmf_tgt_br" 00:10:12.484 16:21:46 -- nvmf/common.sh@155 -- # true 00:10:12.484 16:21:46 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:12.484 Cannot find device "nvmf_tgt_br2" 00:10:12.484 16:21:46 -- nvmf/common.sh@156 -- # true 00:10:12.484 16:21:46 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:12.484 16:21:46 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:12.484 Cannot find device "nvmf_tgt_br" 00:10:12.484 16:21:46 -- nvmf/common.sh@158 -- # true 00:10:12.484 16:21:46 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:12.484 Cannot find device "nvmf_tgt_br2" 00:10:12.484 16:21:46 -- nvmf/common.sh@159 -- # true 00:10:12.484 16:21:46 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:12.484 16:21:46 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:12.484 16:21:46 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:12.485 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:12.485 16:21:46 -- nvmf/common.sh@162 -- # true 00:10:12.485 16:21:46 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:12.485 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:12.485 16:21:46 -- nvmf/common.sh@163 -- # true 00:10:12.485 16:21:46 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:12.485 16:21:46 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:12.485 16:21:46 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:12.485 16:21:46 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:12.485 16:21:46 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:12.485 16:21:46 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:12.742 16:21:46 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:12.742 16:21:46 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:12.742 16:21:46 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:12.742 16:21:46 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:12.742 16:21:46 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:12.742 16:21:46 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:12.742 16:21:46 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:12.742 16:21:46 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:12.742 16:21:46 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:12.742 16:21:46 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:12.742 16:21:46 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:12.742 16:21:46 -- 
nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:12.742 16:21:46 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:12.742 16:21:46 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:12.742 16:21:46 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:12.742 16:21:46 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:12.742 16:21:46 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:12.742 16:21:46 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:12.742 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:12.742 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:10:12.742 00:10:12.742 --- 10.0.0.2 ping statistics --- 00:10:12.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.742 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:10:12.742 16:21:46 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:12.742 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:12.742 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:10:12.742 00:10:12.743 --- 10.0.0.3 ping statistics --- 00:10:12.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.743 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:10:12.743 16:21:46 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:12.743 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:12.743 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:10:12.743 00:10:12.743 --- 10.0.0.1 ping statistics --- 00:10:12.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.743 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:10:12.743 16:21:46 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:12.743 16:21:46 -- nvmf/common.sh@422 -- # return 0 00:10:12.743 16:21:46 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:12.743 16:21:46 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:12.743 16:21:46 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:12.743 16:21:46 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:12.743 16:21:46 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:12.743 16:21:46 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:12.743 16:21:46 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:12.743 16:21:46 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:10:12.743 16:21:46 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:12.743 16:21:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:12.743 16:21:46 -- common/autotest_common.sh@10 -- # set +x 00:10:12.743 16:21:46 -- nvmf/common.sh@470 -- # nvmfpid=70106 00:10:12.743 16:21:46 -- nvmf/common.sh@471 -- # waitforlisten 70106 00:10:12.743 16:21:46 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:12.743 16:21:46 -- common/autotest_common.sh@817 -- # '[' -z 70106 ']' 00:10:12.743 16:21:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.743 16:21:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:12.743 16:21:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
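As with the earlier tests, the target is launched inside the namespace and the harness blocks until the RPC socket answers (the "Waiting for process to start up..." message above). A rough equivalent of that nvmfappstart/waitforlisten step; the polling loop is a simplification of the harness's actual waitforlisten helper:

  # Launch the target in the test namespace and wait for its RPC socket.
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1   # retry until the app is up and listening on /var/tmp/spdk.sock
  done

Core mask 0x2 gives this target a single reactor (the "Reactor started on core 1" notice below), versus mask 0xE and three reactors for the connect_stress target earlier.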
00:10:12.743 16:21:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:12.743 16:21:46 -- common/autotest_common.sh@10 -- # set +x 00:10:12.743 [2024-04-17 16:21:46.755591] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:10:12.743 [2024-04-17 16:21:46.755700] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:13.001 [2024-04-17 16:21:46.895568] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.001 [2024-04-17 16:21:47.017482] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:13.001 [2024-04-17 16:21:47.017548] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:13.001 [2024-04-17 16:21:47.017560] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:13.001 [2024-04-17 16:21:47.017569] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:13.001 [2024-04-17 16:21:47.017577] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:13.001 [2024-04-17 16:21:47.017608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:13.937 16:21:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:13.937 16:21:47 -- common/autotest_common.sh@850 -- # return 0 00:10:13.937 16:21:47 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:13.937 16:21:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:13.937 16:21:47 -- common/autotest_common.sh@10 -- # set +x 00:10:13.937 16:21:47 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:13.937 16:21:47 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:13.937 16:21:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:13.937 16:21:47 -- common/autotest_common.sh@10 -- # set +x 00:10:13.937 [2024-04-17 16:21:47.785745] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:13.937 16:21:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:13.938 16:21:47 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:13.938 16:21:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:13.938 16:21:47 -- common/autotest_common.sh@10 -- # set +x 00:10:13.938 16:21:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:13.938 16:21:47 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:13.938 16:21:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:13.938 16:21:47 -- common/autotest_common.sh@10 -- # set +x 00:10:13.938 [2024-04-17 16:21:47.805888] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:13.938 16:21:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:13.938 16:21:47 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:13.938 16:21:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:13.938 16:21:47 -- common/autotest_common.sh@10 -- # set +x 00:10:13.938 NULL1 00:10:13.938 16:21:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:13.938 16:21:47 -- 
target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:10:13.938 16:21:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:13.938 16:21:47 -- common/autotest_common.sh@10 -- # set +x 00:10:13.938 16:21:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:13.938 16:21:47 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:13.938 16:21:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:13.938 16:21:47 -- common/autotest_common.sh@10 -- # set +x 00:10:13.938 16:21:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:13.938 16:21:47 -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:13.938 [2024-04-17 16:21:47.858131] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:10:13.938 [2024-04-17 16:21:47.858183] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70156 ] 00:10:14.505 Attached to nqn.2016-06.io.spdk:cnode1 00:10:14.505 Namespace ID: 1 size: 1GB 00:10:14.505 fused_ordering(0) 00:10:14.505 fused_ordering(1) 00:10:14.505 fused_ordering(2) 00:10:14.505 fused_ordering(3) 00:10:14.505 fused_ordering(4) 00:10:14.505 fused_ordering(5) 00:10:14.505 fused_ordering(6) 00:10:14.505 fused_ordering(7) 00:10:14.505 fused_ordering(8) 00:10:14.505 fused_ordering(9) 00:10:14.505 fused_ordering(10) 00:10:14.505 fused_ordering(11) 00:10:14.505 fused_ordering(12) 00:10:14.505 fused_ordering(13) 00:10:14.505 fused_ordering(14) 00:10:14.505 fused_ordering(15) 00:10:14.505 fused_ordering(16) 00:10:14.505 fused_ordering(17) 00:10:14.505 fused_ordering(18) 00:10:14.505 fused_ordering(19) 00:10:14.505 fused_ordering(20) 00:10:14.505 fused_ordering(21) 00:10:14.505 fused_ordering(22) 00:10:14.505 fused_ordering(23) 00:10:14.505 fused_ordering(24) 00:10:14.505 fused_ordering(25) 00:10:14.505 fused_ordering(26) 00:10:14.505 fused_ordering(27) 00:10:14.505 fused_ordering(28) 00:10:14.505 fused_ordering(29) 00:10:14.505 fused_ordering(30) 00:10:14.505 fused_ordering(31) 00:10:14.505 fused_ordering(32) 00:10:14.505 fused_ordering(33) 00:10:14.505 fused_ordering(34) 00:10:14.505 fused_ordering(35) 00:10:14.505 fused_ordering(36) 00:10:14.505 fused_ordering(37) 00:10:14.505 fused_ordering(38) 00:10:14.505 fused_ordering(39) 00:10:14.505 fused_ordering(40) 00:10:14.505 fused_ordering(41) 00:10:14.505 fused_ordering(42) 00:10:14.505 fused_ordering(43) 00:10:14.505 fused_ordering(44) 00:10:14.505 fused_ordering(45) 00:10:14.505 fused_ordering(46) 00:10:14.505 fused_ordering(47) 00:10:14.505 fused_ordering(48) 00:10:14.505 fused_ordering(49) 00:10:14.505 fused_ordering(50) 00:10:14.505 fused_ordering(51) 00:10:14.505 fused_ordering(52) 00:10:14.505 fused_ordering(53) 00:10:14.505 fused_ordering(54) 00:10:14.505 fused_ordering(55) 00:10:14.505 fused_ordering(56) 00:10:14.505 fused_ordering(57) 00:10:14.505 fused_ordering(58) 00:10:14.505 fused_ordering(59) 00:10:14.505 fused_ordering(60) 00:10:14.505 fused_ordering(61) 00:10:14.505 fused_ordering(62) 00:10:14.505 fused_ordering(63) 00:10:14.505 fused_ordering(64) 00:10:14.505 fused_ordering(65) 00:10:14.505 fused_ordering(66) 00:10:14.505 fused_ordering(67) 00:10:14.505 fused_ordering(68) 00:10:14.505 
fused_ordering(69) 00:10:14.505 fused_ordering(70) 00:10:14.505 ... 00:10:16.159 fused_ordering(1022) 00:10:16.159 fused_ordering(1023) 00:10:16.159 16:21:49 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:10:16.159 16:21:49 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:10:16.159 16:21:49 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:16.159 16:21:49 -- nvmf/common.sh@117 -- # sync 00:10:16.159 16:21:49 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:16.159 16:21:49 -- nvmf/common.sh@120 -- # set +e 00:10:16.159 16:21:49 -- nvmf/common.sh@121 --
# for i in {1..20} 00:10:16.159 16:21:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:16.159 rmmod nvme_tcp 00:10:16.159 rmmod nvme_fabrics 00:10:16.159 rmmod nvme_keyring 00:10:16.159 16:21:50 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:16.159 16:21:50 -- nvmf/common.sh@124 -- # set -e 00:10:16.159 16:21:50 -- nvmf/common.sh@125 -- # return 0 00:10:16.159 16:21:50 -- nvmf/common.sh@478 -- # '[' -n 70106 ']' 00:10:16.159 16:21:50 -- nvmf/common.sh@479 -- # killprocess 70106 00:10:16.159 16:21:50 -- common/autotest_common.sh@936 -- # '[' -z 70106 ']' 00:10:16.159 16:21:50 -- common/autotest_common.sh@940 -- # kill -0 70106 00:10:16.159 16:21:50 -- common/autotest_common.sh@941 -- # uname 00:10:16.159 16:21:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:16.159 16:21:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70106 00:10:16.159 16:21:50 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:16.159 16:21:50 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:16.159 16:21:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70106' 00:10:16.159 killing process with pid 70106 00:10:16.159 16:21:50 -- common/autotest_common.sh@955 -- # kill 70106 00:10:16.159 16:21:50 -- common/autotest_common.sh@960 -- # wait 70106 00:10:16.418 16:21:50 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:16.418 16:21:50 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:16.418 16:21:50 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:10:16.418 16:21:50 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:16.418 16:21:50 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:16.418 16:21:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:16.418 16:21:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:16.418 16:21:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:16.418 16:21:50 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:16.418 ************************************ 00:10:16.418 END TEST nvmf_fused_ordering 00:10:16.418 ************************************ 00:10:16.418 00:10:16.418 real 0m4.103s 00:10:16.418 user 0m4.954s 00:10:16.418 sys 0m1.357s 00:10:16.418 16:21:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:16.418 16:21:50 -- common/autotest_common.sh@10 -- # set +x 00:10:16.418 16:21:50 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:16.418 16:21:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:16.418 16:21:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:16.418 16:21:50 -- common/autotest_common.sh@10 -- # set +x 00:10:16.418 ************************************ 00:10:16.418 START TEST nvmf_delete_subsystem 00:10:16.418 ************************************ 00:10:16.418 16:21:50 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:16.677 * Looking for test storage... 
00:10:16.678 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:16.678 16:21:50 -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:16.678 16:21:50 -- nvmf/common.sh@7 -- # uname -s 00:10:16.678 16:21:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:16.678 16:21:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:16.678 16:21:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:16.678 16:21:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:16.678 16:21:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:16.678 16:21:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:16.678 16:21:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:16.678 16:21:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:16.678 16:21:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:16.678 16:21:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:16.678 16:21:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d 00:10:16.678 16:21:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=35bbb10f-fc38-42ac-b909-033700c5e05d 00:10:16.678 16:21:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:16.678 16:21:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:16.678 16:21:50 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:16.678 16:21:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:16.678 16:21:50 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:16.678 16:21:50 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:16.678 16:21:50 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:16.678 16:21:50 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:16.678 16:21:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.678 16:21:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.678 16:21:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.678 16:21:50 -- paths/export.sh@5 -- # export PATH 00:10:16.678 16:21:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.678 16:21:50 -- nvmf/common.sh@47 -- # : 0 00:10:16.678 16:21:50 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:16.678 16:21:50 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:16.678 16:21:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:16.678 16:21:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:16.678 16:21:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:16.678 16:21:50 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:16.678 16:21:50 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:16.678 16:21:50 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:16.678 16:21:50 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:10:16.678 16:21:50 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:16.678 16:21:50 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:16.678 16:21:50 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:16.678 16:21:50 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:16.678 16:21:50 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:16.678 16:21:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:16.678 16:21:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:16.678 16:21:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:16.678 16:21:50 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:10:16.678 16:21:50 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:10:16.678 16:21:50 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:10:16.678 16:21:50 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:10:16.678 16:21:50 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:10:16.678 16:21:50 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:10:16.678 16:21:50 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:16.678 16:21:50 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:16.678 16:21:50 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:16.678 16:21:50 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:16.678 16:21:50 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:16.678 16:21:50 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:16.678 16:21:50 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:16.678 16:21:50 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:10:16.678 16:21:50 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:16.678 16:21:50 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:16.678 16:21:50 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:16.678 16:21:50 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:16.678 16:21:50 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:16.678 16:21:50 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:16.678 Cannot find device "nvmf_tgt_br" 00:10:16.678 16:21:50 -- nvmf/common.sh@155 -- # true 00:10:16.678 16:21:50 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:16.678 Cannot find device "nvmf_tgt_br2" 00:10:16.678 16:21:50 -- nvmf/common.sh@156 -- # true 00:10:16.678 16:21:50 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:16.678 16:21:50 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:16.678 Cannot find device "nvmf_tgt_br" 00:10:16.678 16:21:50 -- nvmf/common.sh@158 -- # true 00:10:16.678 16:21:50 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:16.678 Cannot find device "nvmf_tgt_br2" 00:10:16.678 16:21:50 -- nvmf/common.sh@159 -- # true 00:10:16.678 16:21:50 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:16.678 16:21:50 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:16.678 16:21:50 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:16.678 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:16.678 16:21:50 -- nvmf/common.sh@162 -- # true 00:10:16.678 16:21:50 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:16.678 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:16.678 16:21:50 -- nvmf/common.sh@163 -- # true 00:10:16.678 16:21:50 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:16.678 16:21:50 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:16.678 16:21:50 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:16.937 16:21:50 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:16.937 16:21:50 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:16.937 16:21:50 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:16.937 16:21:50 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:16.937 16:21:50 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:16.937 16:21:50 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:16.937 16:21:50 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:16.937 16:21:50 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:16.937 16:21:50 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:16.937 16:21:50 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:16.937 16:21:50 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:16.937 16:21:50 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:16.937 16:21:50 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:16.937 16:21:50 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:16.937 16:21:50 -- 
nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:16.937 16:21:50 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:16.937 16:21:50 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:16.937 16:21:50 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:16.937 16:21:50 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:16.937 16:21:50 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:16.937 16:21:50 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:16.937 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:16.937 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:10:16.937 00:10:16.937 --- 10.0.0.2 ping statistics --- 00:10:16.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.937 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:10:16.937 16:21:50 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:16.937 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:16.937 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:10:16.937 00:10:16.937 --- 10.0.0.3 ping statistics --- 00:10:16.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.937 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:10:16.937 16:21:50 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:16.937 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:16.937 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:10:16.937 00:10:16.937 --- 10.0.0.1 ping statistics --- 00:10:16.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.937 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:10:16.937 16:21:50 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:16.937 16:21:50 -- nvmf/common.sh@422 -- # return 0 00:10:16.937 16:21:50 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:16.937 16:21:50 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:16.937 16:21:50 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:16.937 16:21:50 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:16.937 16:21:50 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:16.937 16:21:50 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:16.937 16:21:50 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:16.937 16:21:50 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:10:16.937 16:21:50 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:16.937 16:21:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:16.937 16:21:50 -- common/autotest_common.sh@10 -- # set +x 00:10:16.937 16:21:50 -- nvmf/common.sh@470 -- # nvmfpid=70373 00:10:16.937 16:21:50 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:10:16.937 16:21:50 -- nvmf/common.sh@471 -- # waitforlisten 70373 00:10:16.937 16:21:50 -- common/autotest_common.sh@817 -- # '[' -z 70373 ']' 00:10:16.937 16:21:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.937 16:21:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:16.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.937 16:21:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
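The three pings above verify the veth test bed end to end (host to the target addresses 10.0.0.2 and 10.0.0.3, and target namespace back to the host at 10.0.0.1) before the target application is brought up on it. For reference, once the subsystem created below is listening, an initiator in the root namespace could attach over this same bed with nvme-cli; a minimal sketch, reusing the NVME_HOSTNQN/NVME_HOSTID values sourced above (nvme-cli itself is an assumption here, since the harness drives I/O with SPDK's own tools instead):
# sketch only; assumes nvme-cli is installed and the kernel nvme-tcp module is loaded
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d \
    --hostid=35bbb10f-fc38-42ac-b909-033700c5e05d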
00:10:16.937 16:21:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:16.937 16:21:50 -- common/autotest_common.sh@10 -- # set +x 00:10:17.195 [2024-04-17 16:21:50.995368] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:10:17.195 [2024-04-17 16:21:50.995498] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:17.195 [2024-04-17 16:21:51.140705] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:17.453 [2024-04-17 16:21:51.277541] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:17.453 [2024-04-17 16:21:51.277609] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:17.453 [2024-04-17 16:21:51.277624] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:17.453 [2024-04-17 16:21:51.277635] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:17.453 [2024-04-17 16:21:51.277644] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:17.453 [2024-04-17 16:21:51.277762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:17.453 [2024-04-17 16:21:51.278027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.020 16:21:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:18.020 16:21:51 -- common/autotest_common.sh@850 -- # return 0 00:10:18.020 16:21:51 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:18.020 16:21:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:18.020 16:21:51 -- common/autotest_common.sh@10 -- # set +x 00:10:18.020 16:21:52 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:18.020 16:21:52 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:18.020 16:21:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:18.020 16:21:52 -- common/autotest_common.sh@10 -- # set +x 00:10:18.020 [2024-04-17 16:21:52.022891] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:18.020 16:21:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:18.020 16:21:52 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:18.020 16:21:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:18.020 16:21:52 -- common/autotest_common.sh@10 -- # set +x 00:10:18.020 16:21:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:18.020 16:21:52 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:18.020 16:21:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:18.020 16:21:52 -- common/autotest_common.sh@10 -- # set +x 00:10:18.020 [2024-04-17 16:21:52.039180] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:18.020 16:21:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:18.020 16:21:52 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:18.020 16:21:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:18.020 16:21:52 -- common/autotest_common.sh@10 -- # set +x 00:10:18.020 
NULL1 00:10:18.020 16:21:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:18.020 16:21:52 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:18.020 16:21:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:18.020 16:21:52 -- common/autotest_common.sh@10 -- # set +x 00:10:18.020 Delay0 00:10:18.020 16:21:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:18.020 16:21:52 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:18.020 16:21:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:18.020 16:21:52 -- common/autotest_common.sh@10 -- # set +x 00:10:18.278 16:21:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:18.278 16:21:52 -- target/delete_subsystem.sh@28 -- # perf_pid=70424 00:10:18.278 16:21:52 -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:18.278 16:21:52 -- target/delete_subsystem.sh@30 -- # sleep 2 00:10:18.278 [2024-04-17 16:21:52.233687] subsystem.c:1431:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:10:20.208 16:21:54 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:20.208 16:21:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:20.208 16:21:54 -- common/autotest_common.sh@10 -- # set +x 00:10:20.471 Write completed with error (sct=0, sc=8) 00:10:20.471 Read completed with error (sct=0, sc=8) 00:10:20.471 starting I/O failed: -6 00:10:20.471 Write completed with error (sct=0, sc=8) 00:10:20.471 Write completed with error (sct=0, sc=8) 00:10:20.471 Write completed with error (sct=0, sc=8) 00:10:20.471 Read completed with error (sct=0, sc=8) 00:10:20.471 starting I/O failed: -6 00:10:20.471 Read completed with error (sct=0, sc=8) 00:10:20.471 Read completed with error (sct=0, sc=8) 00:10:20.471 Read completed with error (sct=0, sc=8) 00:10:20.471 Read completed with error (sct=0, sc=8) 00:10:20.471 starting I/O failed: -6 00:10:20.471 Write completed with error (sct=0, sc=8) 00:10:20.471 Read completed with error (sct=0, sc=8) 00:10:20.471 Read completed with error (sct=0, sc=8) 00:10:20.471 Read completed with error (sct=0, sc=8) 00:10:20.471 starting I/O failed: -6 00:10:20.471 Read completed with error (sct=0, sc=8) 00:10:20.471 Write completed with error (sct=0, sc=8) 00:10:20.471 Read completed with error (sct=0, sc=8) 00:10:20.471 Write completed with error (sct=0, sc=8) 00:10:20.471 starting I/O failed: -6 00:10:20.471 Read completed with error (sct=0, sc=8) 00:10:20.471 Read completed with error (sct=0, sc=8) 00:10:20.471 Read completed with error (sct=0, sc=8) 00:10:20.471 Read completed with error (sct=0, sc=8) 00:10:20.471 starting I/O failed: -6 00:10:20.471 Read completed with error (sct=0, sc=8) 00:10:20.471 Read completed with error (sct=0, sc=8) 00:10:20.471 Read completed with error (sct=0, sc=8) 00:10:20.471 Read completed with error (sct=0, sc=8) 00:10:20.471 starting I/O failed: -6 00:10:20.471 Read completed with error (sct=0, sc=8) 00:10:20.471 Write completed with error (sct=0, sc=8) 00:10:20.471 Write completed with error (sct=0, sc=8) 00:10:20.471 Read 
completed with error (sct=0, sc=8)
00:10:20.471 starting I/O failed: -6
00:10:20.471 [... long run of "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" entries interleaved with "starting I/O failed: -6", at 00:10:20.471-00:10:20.472 ...]
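A note on the flood above: (sct=0, sc=8) decodes as NVMe status code type 0 (generic command status), status code 0x08, "Command Aborted due to SQ Deletion". That is the expected completion status for I/O still in flight while the target tears a subsystem's queues down, so this run of aborted reads and writes is the behavior nvmf_delete_subsystem provokes on purpose rather than an infrastructure failure.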
00:10:20.472 starting I/O failed: -6
00:10:20.472 [... further "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries ...]
00:10:21.404 [2024-04-17 16:21:55.250595] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1c710 is same with the state(5) to be set
00:10:21.404 [... repeated "Read/Write completed with error (sct=0, sc=8)" entries ...]
00:10:21.404 16:21:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:10:21.404 16:21:55 -- target/delete_subsystem.sh@34 -- # delay=0
00:10:21.404 16:21:55 -- target/delete_subsystem.sh@35 -- # kill -0 70424
00:10:21.404 16:21:55 -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:10:21.404 [... repeated "Read/Write completed with error (sct=0, sc=8)" entries ...]
00:10:21.404 [2024-04-17 16:21:55.278983] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffbb000bf90 is same with the state(5) to be set
00:10:21.404 [... repeated "Read/Write completed with error (sct=0, sc=8)" entries ...]
00:10:21.405 [2024-04-17 16:21:55.279344] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fcdf0 is same with the state(5) to be set
00:10:21.405 [... repeated "Read/Write completed with error (sct=0, sc=8)" entries ...]
00:10:21.405 [2024-04-17 16:21:55.279622] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1bea0 is same with the state(5) to be set
00:10:21.405 [... repeated "Read/Write completed with error (sct=0, sc=8)" entries ...]
00:10:21.405 [2024-04-17 16:21:55.281273] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffbb000c510 is same with the state(5) to be set
00:10:21.405 [2024-04-17 16:21:55.282141] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a1c710 (9): Bad file descriptor
00:10:21.405 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred
00:10:21.405 Initializing NVMe Controllers
00:10:21.405 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:21.405 Controller IO queue size 128, less than required.
00:10:21.405 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:21.405 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:10:21.405 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:10:21.405 Initialization complete. Launching workers.
00:10:21.405 ========================================================
00:10:21.405 Latency(us)
00:10:21.405 Device Information : IOPS MiB/s Average min max
00:10:21.405 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 186.66 0.09 903117.10 529.72 1018611.66
00:10:21.405 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 177.23 0.09 951146.65 421.75 2010770.01
00:10:21.405 ========================================================
00:10:21.405 Total : 363.89 0.18 926509.39 421.75 2010770.01
00:10:21.405
00:10:21.970 16:21:55 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:10:21.970 16:21:55 -- target/delete_subsystem.sh@35 -- # kill -0 70424
00:10:21.970 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (70424) - No such process
00:10:21.970 16:21:55 -- target/delete_subsystem.sh@45 -- # NOT wait 70424
00:10:21.970 16:21:55 -- common/autotest_common.sh@638 -- # local es=0
00:10:21.970 16:21:55 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 70424
00:10:21.970 16:21:55 -- common/autotest_common.sh@626 -- # local arg=wait
00:10:21.970 16:21:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:10:21.970 16:21:55 -- common/autotest_common.sh@630 -- # type -t wait
00:10:21.970 16:21:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:10:21.970 16:21:55 -- common/autotest_common.sh@641 -- # wait 70424
00:10:21.970 16:21:55 -- common/autotest_common.sh@641 -- # es=1
00:10:21.970 16:21:55 -- common/autotest_common.sh@649 -- # (( es > 128 ))
00:10:21.970 16:21:55 -- common/autotest_common.sh@660 -- # [[ -n '' ]]
00:10:21.970 16:21:55 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
00:10:21.970 16:21:55 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:10:21.970 16:21:55 -- common/autotest_common.sh@549 -- # xtrace_disable
00:10:21.970 16:21:55 -- common/autotest_common.sh@10 -- # set +x
00:10:21.970 16:21:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:10:21.970 16:21:55 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:10:21.970 16:21:55 -- common/autotest_common.sh@549 -- # xtrace_disable
00:10:21.970 16:21:55 -- common/autotest_common.sh@10 -- # set +x
00:10:21.970 [2024-04-17 16:21:55.796449] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
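The xtrace above (delete_subsystem.sh lines 34-38, and again lines 56-60 below) is a poll-until-dead loop around the background spdk_nvme_perf process. A minimal sketch of its shape, reconstructed from the trace rather than quoted from the script (perf_pid=$! is an assumption about how the PID is captured):

    # sketch reconstructed from the xtrace; not the verbatim delete_subsystem.sh
    perf_pid=$!                          # PID of the backgrounded spdk_nvme_perf
    delay=0
    while kill -0 "$perf_pid"; do        # still running?
        sleep 0.5
        (( delay++ > 30 )) && exit 1     # give up if perf never exits
    done
    NOT wait "$perf_pid"                 # reap it; the harness helper asserts a non-zero exit

The "kill: (70424) - No such process" line is the loop's exit condition firing: deleting the subsystem killed the workload, which is exactly what the test asserts before it recreates the subsystem for the next round.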
00:10:21.970 16:21:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:10:21.970 16:21:55 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:21.970 16:21:55 -- common/autotest_common.sh@549 -- # xtrace_disable
00:10:21.970 16:21:55 -- common/autotest_common.sh@10 -- # set +x
00:10:21.970 16:21:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:10:21.970 16:21:55 -- target/delete_subsystem.sh@54 -- # perf_pid=70474
00:10:21.970 16:21:55 -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:10:21.970 16:21:55 -- target/delete_subsystem.sh@56 -- # delay=0
00:10:21.970 16:21:55 -- target/delete_subsystem.sh@57 -- # kill -0 70474
00:10:21.970 16:21:55 -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:10:21.970 [2024-04-17 16:21:55.980919] subsystem.c:1431:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:10:22.536 16:21:56 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:10:22.536 16:21:56 -- target/delete_subsystem.sh@57 -- # kill -0 70474
00:10:22.536 16:21:56 -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:10:22.794 [... the same @60 / @57 / @58 polling round repeats roughly every half second through 00:10:25.061 while spdk_nvme_perf runs its 3-second workload ...]
00:10:25.061 Initializing NVMe Controllers
00:10:25.061 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:25.061 Controller IO queue size 128, less than required.
00:10:25.061 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:25.061 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:10:25.061 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:10:25.061 Initialization complete. Launching workers.
00:10:25.061 ========================================================
00:10:25.061 Latency(us)
00:10:25.061 Device Information : IOPS MiB/s Average min max
00:10:25.061 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005746.29 1000215.35 1041408.07
00:10:25.061 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003926.68 1000126.28 1042254.26
00:10:25.061 ========================================================
00:10:25.061 Total : 256.00 0.12 1004836.49 1000126.28 1042254.26
00:10:25.061
00:10:25.320 16:21:59 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:10:25.320 16:21:59 -- target/delete_subsystem.sh@57 -- # kill -0 70474
00:10:25.320 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (70474) - No such process
00:10:25.320 16:21:59 -- target/delete_subsystem.sh@67 -- # wait 70474
00:10:25.320 16:21:59 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:10:25.320 16:21:59 -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:10:25.320 16:21:59 -- nvmf/common.sh@477 -- # nvmfcleanup
00:10:25.320 16:21:59 -- nvmf/common.sh@117 -- # sync
00:10:25.578 16:21:59 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:10:25.578 16:21:59 -- nvmf/common.sh@120 -- # set +e
00:10:25.578 16:21:59 -- nvmf/common.sh@121 -- # for i in {1..20}
00:10:25.578 16:21:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:10:25.578 rmmod nvme_tcp
00:10:25.578 rmmod nvme_fabrics
00:10:25.578 rmmod nvme_keyring
00:10:25.579 16:21:59 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:10:25.579 16:21:59 -- nvmf/common.sh@124 -- # set -e
00:10:25.579 16:21:59 -- nvmf/common.sh@125 -- # return 0
00:10:25.579 16:21:59 -- nvmf/common.sh@478 -- # '[' -n 70373 ']'
00:10:25.579 16:21:59 -- nvmf/common.sh@479 -- # killprocess 70373
00:10:25.579 16:21:59 -- common/autotest_common.sh@936 -- # '[' -z 70373 ']'
00:10:25.579 16:21:59 -- common/autotest_common.sh@940 -- # kill -0 70373
00:10:25.579 16:21:59 -- common/autotest_common.sh@941 -- # uname
00:10:25.579 16:21:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:10:25.579 16:21:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70373
00:10:25.579 16:21:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:10:25.579 killing process with pid 70373
00:10:25.579 16:21:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:10:25.579 16:21:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70373'
00:10:25.579 16:21:59 -- common/autotest_common.sh@955 -- # kill 70373
00:10:25.579 16:21:59 -- common/autotest_common.sh@960 -- # wait 70373
00:10:25.837 16:21:59 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:10:25.837 16:21:59 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:10:25.837 16:21:59 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:10:25.837 16:21:59 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:10:25.837 16:21:59 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:10:25.837 16:21:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:25.837 16:21:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:10:25.837 16:21:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:25.838 16:21:59 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if
00:10:25.838
00:10:25.838 real 0m9.298s
00:10:25.838 user 0m28.661s
00:10:25.838 sys 0m1.504s
00:10:25.838 16:21:59 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:10:25.838 16:21:59 -- common/autotest_common.sh@10 -- # set +x
00:10:25.838 ************************************
00:10:25.838 END TEST nvmf_delete_subsystem
00:10:25.838 ************************************
00:10:25.838 16:21:59 -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp
00:10:25.838 16:21:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:10:25.838 16:21:59 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:25.838 16:21:59 -- common/autotest_common.sh@10 -- # set +x
00:10:25.838 ************************************
00:10:25.838 START TEST nvmf_ns_masking
00:10:25.838 ************************************
00:10:25.838 16:21:59 -- common/autotest_common.sh@1111 -- # test/nvmf/target/ns_masking.sh --transport=tcp
00:10:26.097 * Looking for test storage...
00:10:26.097 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:10:26.097 16:21:59 -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:10:26.097 16:21:59 -- nvmf/common.sh@7 -- # uname -s
00:10:26.097 16:21:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:10:26.097 16:21:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:10:26.097 16:21:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:10:26.097 16:21:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:10:26.097 16:21:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:10:26.097 16:21:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:10:26.097 16:21:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:10:26.097 16:21:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:10:26.097 16:21:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:10:26.097 16:21:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:10:26.097 16:21:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d
00:10:26.097 16:21:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=35bbb10f-fc38-42ac-b909-033700c5e05d
00:10:26.097 16:21:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:10:26.097 16:21:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:10:26.097 16:21:59 -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:10:26.097 16:21:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:10:26.097 16:21:59 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:10:26.097 16:21:59 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:26.097 16:21:59 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:26.097 16:21:59 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:26.097 16:21:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same toolchain triplet repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:26.097 [... paths/export.sh@3 and @4 re-emit the same PATH with the toolchain triplet rotated to the front ...]
00:10:26.097 16:21:59 -- paths/export.sh@5 -- # export PATH
00:10:26.097 16:21:59 -- paths/export.sh@6 -- # echo [the same PATH value as above]
00:10:26.097 16:21:59 -- nvmf/common.sh@47 -- # : 0
00:10:26.097 16:21:59 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:10:26.097 16:21:59 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:10:26.097 16:21:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:10:26.097 16:21:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:10:26.097 16:21:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:10:26.097 16:21:59 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:10:26.097 16:21:59 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:10:26.097 16:21:59 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:10:26.097 16:21:59 -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:10:26.097 16:21:59 -- target/ns_masking.sh@11 -- # loops=5
00:10:26.097 16:21:59 -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1
00:10:26.097 16:21:59 -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1
00:10:26.097 16:21:59 -- target/ns_masking.sh@15 -- # uuidgen
00:10:26.097 16:21:59 -- target/ns_masking.sh@15 -- # HOSTID=11959eef-e281-4abf-a397-260f70f0dd28
00:10:26.097 16:21:59 -- target/ns_masking.sh@44 -- # nvmftestinit
00:10:26.097 16:21:59 -- nvmf/common.sh@430 -- # '[' -z tcp ']'
00:10:26.097 16:21:59 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:10:26.097 16:21:59 -- nvmf/common.sh@437 -- # prepare_net_devs
00:10:26.097 16:21:59 -- nvmf/common.sh@399 -- # local -g is_hw=no
00:10:26.097 16:21:59 -- nvmf/common.sh@401 -- # remove_spdk_ns
00:10:26.097 16:21:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:26.097 16:21:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:10:26.097 16:21:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
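ns_masking.sh pins its identities up front: SUBSYSNQN, HOSTNQN, and a freshly generated HOSTID (the uuidgen at line 15 above). Every connect in this test then takes the same shape; the following is the exact form that appears later in this log, annotated here for reference:

    nvme connect -t tcp \
        -n nqn.2016-06.io.spdk:cnode1 \                # SUBSYSNQN: the subsystem under test
        -q nqn.2016-06.io.spdk:host1 \                 # HOSTNQN: the identity masking decisions key on
        -I 11959eef-e281-4abf-a397-260f70f0dd28 \      # HOSTID
        -a 10.0.0.2 -s 4420 -i 4                       # target address/port, 4 I/O queues

Namespace masking is evaluated against the host NQN presented at connect time, which is why the test keeps the same -q value while toggling per-host visibility on the target side.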
00:10:26.097 16:21:59 -- nvmf/common.sh@403 -- # [[ virt != virt ]]
00:10:26.097 16:21:59 -- nvmf/common.sh@405 -- # [[ no == yes ]]
00:10:26.097 16:21:59 -- nvmf/common.sh@412 -- # [[ virt == phy ]]
00:10:26.097 16:21:59 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]]
00:10:26.097 16:21:59 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]]
00:10:26.097 16:21:59 -- nvmf/common.sh@421 -- # nvmf_veth_init
00:10:26.097 16:21:59 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1
00:10:26.097 16:21:59 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:10:26.097 16:21:59 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:10:26.097 16:21:59 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br
00:10:26.097 16:21:59 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:10:26.097 16:21:59 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:10:26.097 16:21:59 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:10:26.097 16:21:59 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:10:26.097 16:21:59 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:10:26.097 16:21:59 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:10:26.097 16:21:59 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:10:26.097 16:21:59 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:10:26.097 16:21:59 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster
00:10:26.097 16:21:59 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster
00:10:26.097 Cannot find device "nvmf_tgt_br"
00:10:26.097 16:22:00 -- nvmf/common.sh@155 -- # true
00:10:26.097 16:22:00 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster
00:10:26.097 Cannot find device "nvmf_tgt_br2"
00:10:26.097 16:22:00 -- nvmf/common.sh@156 -- # true
00:10:26.097 16:22:00 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down
00:10:26.097 16:22:00 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down
00:10:26.097 Cannot find device "nvmf_tgt_br"
00:10:26.097 16:22:00 -- nvmf/common.sh@158 -- # true
00:10:26.097 16:22:00 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down
00:10:26.097 Cannot find device "nvmf_tgt_br2"
00:10:26.097 16:22:00 -- nvmf/common.sh@159 -- # true
00:10:26.097 16:22:00 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge
00:10:26.097 16:22:00 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if
00:10:26.097 16:22:00 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:10:26.097 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:10:26.097 16:22:00 -- nvmf/common.sh@162 -- # true
00:10:26.097 16:22:00 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:10:26.097 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:10:26.097 16:22:00 -- nvmf/common.sh@163 -- # true
00:10:26.097 16:22:00 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk
00:10:26.097 16:22:00 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:10:26.097 16:22:00 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:10:26.097 16:22:00 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:10:26.356 16:22:00 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:10:26.356 16:22:00 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
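The "Cannot find device" and "Cannot open network namespace" complaints above are expected: nvmf_veth_init first tears down whatever topology a previous run may have left behind, and each failing probe is immediately followed by "# true" in the xtrace, so the non-zero exit codes are deliberately swallowed before the fresh namespace and veth pairs are created.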
00:10:26.356 16:22:00 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:10:26.356 16:22:00 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:10:26.356 16:22:00 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:10:26.356 16:22:00 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up
00:10:26.356 16:22:00 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up
00:10:26.356 16:22:00 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up
00:10:26.356 16:22:00 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up
00:10:26.356 16:22:00 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:10:26.356 16:22:00 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:10:26.356 16:22:00 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:10:26.356 16:22:00 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge
00:10:26.356 16:22:00 -- nvmf/common.sh@193 -- # ip link set nvmf_br up
00:10:26.356 16:22:00 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br
00:10:26.356 16:22:00 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br
00:10:26.356 16:22:00 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:10:26.356 16:22:00 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:10:26.356 16:22:00 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:10:26.356 16:22:00 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2
00:10:26.356 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:26.356 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms
00:10:26.356
00:10:26.356 --- 10.0.0.2 ping statistics ---
00:10:26.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:26.356 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms
00:10:26.356 16:22:00 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3
00:10:26.356 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:10:26.356 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms
00:10:26.356
00:10:26.356 --- 10.0.0.3 ping statistics ---
00:10:26.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:26.356 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms
00:10:26.356 16:22:00 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:10:26.356 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:26.356 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms
00:10:26.356
00:10:26.356 --- 10.0.0.1 ping statistics ---
00:10:26.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:26.356 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms
00:10:26.356 16:22:00 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:26.356 16:22:00 -- nvmf/common.sh@422 -- # return 0
00:10:26.356 16:22:00 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:10:26.356 16:22:00 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:26.356 16:22:00 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:10:26.356 16:22:00 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:10:26.356 16:22:00 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:26.356 16:22:00 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:10:26.356 16:22:00 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:10:26.356 16:22:00 -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF
00:10:26.356 16:22:00 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:10:26.356 16:22:00 -- common/autotest_common.sh@710 -- # xtrace_disable
00:10:26.356 16:22:00 -- common/autotest_common.sh@10 -- # set +x
00:10:26.356 16:22:00 -- nvmf/common.sh@470 -- # nvmfpid=70712
00:10:26.356 16:22:00 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:10:26.356 16:22:00 -- nvmf/common.sh@471 -- # waitforlisten 70712
00:10:26.356 16:22:00 -- common/autotest_common.sh@817 -- # '[' -z 70712 ']'
00:10:26.356 16:22:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:26.356 16:22:00 -- common/autotest_common.sh@822 -- # local max_retries=100
00:10:26.356 16:22:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:26.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:26.356 16:22:00 -- common/autotest_common.sh@826 -- # xtrace_disable
00:10:26.356 16:22:00 -- common/autotest_common.sh@10 -- # set +x
00:10:26.614 [2024-04-17 16:22:00.406583] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization...
00:10:26.614 [2024-04-17 16:22:00.406691] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:26.614 [2024-04-17 16:22:00.548278] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4
00:10:26.872 [2024-04-17 16:22:00.692027] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:10:26.872 [2024-04-17 16:22:00.692101] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:10:26.872 [2024-04-17 16:22:00.692116] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:10:26.872 [2024-04-17 16:22:00.692127] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running.
00:10:26.872 [2024-04-17 16:22:00.692136] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
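Pulling the veth plumbing of the last few hundred milliseconds together, the rig the harness just built looks like this (a condensed sketch of the commands visible above, not the verbatim nvmf_veth_init):

    ip netns add nvmf_tgt_ns_spdk                                # target gets its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                      # bridge ties the two pairs together
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port

With nvmf_tgt then launched under "ip netns exec nvmf_tgt_ns_spdk", the initiator at 10.0.0.1 reaches the target's 10.0.0.2:4420 only across this veth/bridge path, which is what makes the three pings above a meaningful health check.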
00:10:26.872 [2024-04-17 16:22:00.692559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:10:26.872 [2024-04-17 16:22:00.692705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:10:26.872 [2024-04-17 16:22:00.693042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:10:26.872 [2024-04-17 16:22:00.693062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:27.436 16:22:01 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:10:27.436 16:22:01 -- common/autotest_common.sh@850 -- # return 0
00:10:27.436 16:22:01 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:10:27.436 16:22:01 -- common/autotest_common.sh@716 -- # xtrace_disable
00:10:27.436 16:22:01 -- common/autotest_common.sh@10 -- # set +x
00:10:27.437 16:22:01 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:10:27.437 16:22:01 -- target/ns_masking.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:10:27.694 [2024-04-17 16:22:01.733870] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:10:27.952 16:22:01 -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64
00:10:27.952 16:22:01 -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512
00:10:27.952 16:22:01 -- target/ns_masking.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:10:28.210 Malloc1
00:10:28.210 16:22:02 -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
00:10:28.468 Malloc2
00:10:28.468 16:22:02 -- target/ns_masking.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:10:28.726 16:22:02 -- target/ns_masking.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
00:10:28.983 16:22:02 -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:10:28.983 [2024-04-17 16:22:03.014801] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:10:29.241 16:22:03 -- target/ns_masking.sh@61 -- # connect
00:10:29.241 16:22:03 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 11959eef-e281-4abf-a397-260f70f0dd28 -a 10.0.0.2 -s 4420 -i 4
00:10:29.241 16:22:03 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME
00:10:29.241 16:22:03 -- common/autotest_common.sh@1184 -- # local i=0
00:10:29.241 16:22:03 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0
00:10:29.241 16:22:03 -- common/autotest_common.sh@1186 -- # [[ -n '' ]]
00:10:29.241 16:22:03 -- common/autotest_common.sh@1191 -- # sleep 2
00:10:31.141 16:22:05 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 ))
00:10:31.141 16:22:05 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL
00:10:31.141 16:22:05 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME
00:10:31.141 16:22:05 -- common/autotest_common.sh@1193 -- # nvme_devices=1
00:10:31.141 16:22:05 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter ))
00:10:31.141 16:22:05 -- common/autotest_common.sh@1194 -- # return 0
00:10:31.141 16:22:05 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json
00:10:31.141 16:22:05 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:10:31.401 16:22:05 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0
00:10:31.401 16:22:05 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]]
00:10:31.401 16:22:05 -- target/ns_masking.sh@62 -- # ns_is_visible 0x1
00:10:31.401 16:22:05 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0
00:10:31.401 16:22:05 -- target/ns_masking.sh@39 -- # grep 0x1
00:10:31.401 [ 0]:0x1
00:10:31.401 16:22:05 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:10:31.401 16:22:05 -- target/ns_masking.sh@40 -- # jq -r .nguid
00:10:31.401 16:22:05 -- target/ns_masking.sh@40 -- # nguid=81b016e798fe48c2bc713346c6d35f1f
00:10:31.401 16:22:05 -- target/ns_masking.sh@41 -- # [[ 81b016e798fe48c2bc713346c6d35f1f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:10:31.401 16:22:05 -- target/ns_masking.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2
00:10:31.660 16:22:05 -- target/ns_masking.sh@66 -- # ns_is_visible 0x1
00:10:31.660 16:22:05 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0
00:10:31.660 16:22:05 -- target/ns_masking.sh@39 -- # grep 0x1
00:10:31.660 [ 0]:0x1
00:10:31.660 16:22:05 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:10:31.660 16:22:05 -- target/ns_masking.sh@40 -- # jq -r .nguid
00:10:31.660 16:22:05 -- target/ns_masking.sh@40 -- # nguid=81b016e798fe48c2bc713346c6d35f1f
00:10:31.660 16:22:05 -- target/ns_masking.sh@41 -- # [[ 81b016e798fe48c2bc713346c6d35f1f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:10:31.660 16:22:05 -- target/ns_masking.sh@67 -- # ns_is_visible 0x2
00:10:31.660 16:22:05 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0
00:10:31.660 16:22:05 -- target/ns_masking.sh@39 -- # grep 0x2
00:10:31.660 [ 1]:0x2
00:10:31.660 16:22:05 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:10:31.660 16:22:05 -- target/ns_masking.sh@40 -- # jq -r .nguid
00:10:31.660 16:22:05 -- target/ns_masking.sh@40 -- # nguid=f4936281cc6949cb9bec6c9c3c9a73f4
00:10:31.660 16:22:05 -- target/ns_masking.sh@41 -- # [[ f4936281cc6949cb9bec6c9c3c9a73f4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:10:31.660 16:22:05 -- target/ns_masking.sh@69 -- # disconnect
00:10:31.660 16:22:05 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:10:31.918 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
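The visibility probe exercised repeatedly above (ns_masking.sh lines 39-41) combines a list-ns check with an NGUID check. In sketch form, assuming $ctrl_id is the controller name extracted from nvme list-subsys as in the trace (this is a reconstruction, not the verbatim helper):

    ns_is_visible() {
        # e.g. prints "[ 0]:0x1" when the NSID is listed at all
        nvme list-ns "/dev/$ctrl_id" | grep "$1"
        nguid=$(nvme id-ns "/dev/$ctrl_id" -n "$1" -o json | jq -r .nguid)
        # a masked/inactive namespace identifies with an all-zero NGUID
        [[ $nguid != "00000000000000000000000000000000" ]]
    }

The NGUID comparison is the decisive part: Identify Namespace on an inactive NSID returns zero-filled data, so the nguid of a masked namespace reads back as all zeros even though the Identify command itself still completes.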
00:10:31.918 16:22:05 -- target/ns_masking.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:32.177 16:22:06 -- target/ns_masking.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
00:10:32.434 16:22:06 -- target/ns_masking.sh@77 -- # connect 1
00:10:32.434 16:22:06 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 11959eef-e281-4abf-a397-260f70f0dd28 -a 10.0.0.2 -s 4420 -i 4
00:10:32.434 16:22:06 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1
00:10:32.434 16:22:06 -- common/autotest_common.sh@1184 -- # local i=0
00:10:32.434 16:22:06 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0
00:10:32.434 16:22:06 -- common/autotest_common.sh@1186 -- # [[ -n 1 ]]
00:10:32.434 16:22:06 -- common/autotest_common.sh@1187 -- # nvme_device_counter=1
00:10:32.434 16:22:06 -- common/autotest_common.sh@1191 -- # sleep 2
00:10:34.967 16:22:08 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 ))
00:10:34.967 16:22:08 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL
00:10:34.967 16:22:08 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME
00:10:34.967 16:22:08 -- common/autotest_common.sh@1193 -- # nvme_devices=1
00:10:34.967 16:22:08 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter ))
00:10:34.967 16:22:08 -- common/autotest_common.sh@1194 -- # return 0
00:10:34.967 16:22:08 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json
00:10:34.967 16:22:08 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:10:34.967 16:22:08 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0
00:10:34.967 16:22:08 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]]
00:10:34.967 16:22:08 -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1
00:10:34.967 16:22:08 -- common/autotest_common.sh@638 -- # local es=0
00:10:34.967 16:22:08 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1
00:10:34.967 16:22:08 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible
00:10:34.967 16:22:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:10:34.967 16:22:08 -- common/autotest_common.sh@630 -- # type -t ns_is_visible
00:10:34.967 16:22:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:10:34.967 16:22:08 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1
00:10:34.967 16:22:08 -- target/ns_masking.sh@39 -- # grep 0x1
00:10:34.967 16:22:08 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0
00:10:34.967 16:22:08 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:10:34.967 16:22:08 -- target/ns_masking.sh@40 -- # jq -r .nguid
00:10:34.967 16:22:08 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000
00:10:34.967 16:22:08 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:10:34.967 16:22:08 -- common/autotest_common.sh@641 -- # es=1
00:10:34.967 16:22:08 -- common/autotest_common.sh@649 -- # (( es > 128 ))
00:10:34.967 16:22:08 -- common/autotest_common.sh@660 -- # [[ -n '' ]]
00:10:34.967 16:22:08 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
00:10:34.967 16:22:08 -- target/ns_masking.sh@79 -- # ns_is_visible 0x2
00:10:34.967 16:22:08 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0
00:10:34.967 16:22:08 -- target/ns_masking.sh@39 -- # grep 0x2
00:10:34.967 [ 0]:0x2
00:10:34.967 16:22:08 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:10:34.967 16:22:08 -- target/ns_masking.sh@40 -- # jq -r .nguid
00:10:34.967 16:22:08 -- target/ns_masking.sh@40 -- # nguid=f4936281cc6949cb9bec6c9c3c9a73f4
00:10:34.967 16:22:08 -- target/ns_masking.sh@41 -- # [[ f4936281cc6949cb9bec6c9c3c9a73f4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:10:34.967 16:22:08 -- target/ns_masking.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:10:34.967 16:22:08 -- target/ns_masking.sh@83 -- # ns_is_visible 0x1
00:10:34.967 16:22:08 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0
00:10:34.967 16:22:08 -- target/ns_masking.sh@39 -- # grep 0x1
00:10:34.967 [ 0]:0x1
16:22:08 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:10:34.967 16:22:08 -- target/ns_masking.sh@40 -- # jq -r .nguid
00:10:34.967 16:22:08 -- target/ns_masking.sh@40 -- # nguid=81b016e798fe48c2bc713346c6d35f1f
00:10:34.967 16:22:08 -- target/ns_masking.sh@41 -- # [[ 81b016e798fe48c2bc713346c6d35f1f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:10:34.967 16:22:08 -- target/ns_masking.sh@84 -- # ns_is_visible 0x2
00:10:34.967 16:22:08 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0
00:10:34.967 16:22:08 -- target/ns_masking.sh@39 -- # grep 0x2
00:10:34.967 [ 1]:0x2
00:10:34.967 16:22:08 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:10:34.967 16:22:08 -- target/ns_masking.sh@40 -- # jq -r .nguid
00:10:35.226 16:22:09 -- target/ns_masking.sh@40 -- # nguid=f4936281cc6949cb9bec6c9c3c9a73f4
00:10:35.226 16:22:09 -- target/ns_masking.sh@41 -- # [[ f4936281cc6949cb9bec6c9c3c9a73f4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:10:35.226 16:22:09 -- target/ns_masking.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:10:35.485 16:22:09 -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1
00:10:35.485 16:22:09 -- common/autotest_common.sh@638 -- # local es=0
00:10:35.485 16:22:09 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1
00:10:35.485 16:22:09 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible
00:10:35.485 16:22:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:10:35.485 16:22:09 -- common/autotest_common.sh@630 -- # type -t ns_is_visible
00:10:35.485 16:22:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:10:35.485 16:22:09 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1
00:10:35.485 16:22:09 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0
00:10:35.485 16:22:09 -- target/ns_masking.sh@39 -- # grep 0x1
00:10:35.485 16:22:09 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:10:35.485 16:22:09 -- target/ns_masking.sh@40 -- # jq -r .nguid
00:10:35.485 16:22:09 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000
00:10:35.485 16:22:09 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:10:35.485 16:22:09 -- common/autotest_common.sh@641 -- # es=1
00:10:35.485 16:22:09 -- common/autotest_common.sh@649 -- # (( es > 128 ))
00:10:35.485 16:22:09 -- common/autotest_common.sh@660 -- # [[ -n '' ]]
00:10:35.485 16:22:09 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
00:10:35.485 16:22:09 -- target/ns_masking.sh@89 -- # ns_is_visible 0x2
00:10:35.485 16:22:09 -- target/ns_masking.sh@39 -- # grep 0x2
00:10:35.485 16:22:09 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0
00:10:35.485 [ 0]:0x2
00:10:35.485 16:22:09 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:10:35.485 16:22:09 -- target/ns_masking.sh@40 -- # jq -r .nguid
00:10:35.485 16:22:09 -- target/ns_masking.sh@40 -- # nguid=f4936281cc6949cb9bec6c9c3c9a73f4
00:10:35.485 16:22:09 -- target/ns_masking.sh@41 -- # [[ f4936281cc6949cb9bec6c9c3c9a73f4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:10:35.485 16:22:09 -- target/ns_masking.sh@91 -- # disconnect
00:10:35.485 16:22:09 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
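That completes one full mask/unmask cycle. Stripped of the assertions, the target-side RPC sequence the test has just exercised is (all three calls visible verbatim in the xtrace above):

    # attach the namespace with per-host (rather than automatic) visibility
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    # unmask namespace 1 for one specific host
    rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    # mask it again
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

Between the add_host and remove_host calls the host stayed connected, so the namespace was observed appearing and disappearing on a live controller; the disconnect above is the test deliberately resetting for the next scenario.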
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:35.485 16:22:09 -- target/ns_masking.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:35.743 16:22:09 -- target/ns_masking.sh@95 -- # connect 2 00:10:35.743 16:22:09 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 11959eef-e281-4abf-a397-260f70f0dd28 -a 10.0.0.2 -s 4420 -i 4 00:10:36.002 16:22:09 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:10:36.002 16:22:09 -- common/autotest_common.sh@1184 -- # local i=0 00:10:36.002 16:22:09 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:10:36.002 16:22:09 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:10:36.002 16:22:09 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:10:36.002 16:22:09 -- common/autotest_common.sh@1191 -- # sleep 2 00:10:37.907 16:22:11 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:10:37.907 16:22:11 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:10:37.907 16:22:11 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:10:37.907 16:22:11 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:10:37.907 16:22:11 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:10:37.907 16:22:11 -- common/autotest_common.sh@1194 -- # return 0 00:10:37.907 16:22:11 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:10:37.907 16:22:11 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:37.907 16:22:11 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:10:37.907 16:22:11 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:10:37.907 16:22:11 -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:10:37.907 16:22:11 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:37.907 16:22:11 -- target/ns_masking.sh@39 -- # grep 0x1 00:10:37.907 [ 0]:0x1 00:10:37.907 16:22:11 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:37.907 16:22:11 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:38.165 16:22:11 -- target/ns_masking.sh@40 -- # nguid=81b016e798fe48c2bc713346c6d35f1f 00:10:38.165 16:22:11 -- target/ns_masking.sh@41 -- # [[ 81b016e798fe48c2bc713346c6d35f1f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:38.165 16:22:11 -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:10:38.165 16:22:11 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:38.165 16:22:11 -- target/ns_masking.sh@39 -- # grep 0x2 00:10:38.165 [ 1]:0x2 00:10:38.165 16:22:11 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:38.165 16:22:11 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:38.165 16:22:12 -- target/ns_masking.sh@40 -- # nguid=f4936281cc6949cb9bec6c9c3c9a73f4 00:10:38.165 16:22:12 -- target/ns_masking.sh@41 -- # [[ f4936281cc6949cb9bec6c9c3c9a73f4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:38.165 16:22:12 -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:38.425 16:22:12 -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:10:38.425 16:22:12 -- common/autotest_common.sh@638 -- # local es=0 00:10:38.425 16:22:12 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 
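The NOT wrapper entered above is autotest's negative assertion. Simplified from the es bookkeeping visible in the trace, it behaves roughly like this sketch:

NOT() {
    local es=0
    "$@" || es=$?
    # Exit statuses above 128 signal a crash or missing command rather
    # than a clean failure; propagate those instead of inverting them.
    (( es > 128 )) && return "$es"
    # Succeed only when the wrapped command failed, i.e. es != 0.
    (( !es == 0 ))
}
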
00:10:38.425 16:22:12 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:10:38.425 16:22:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:38.425 16:22:12 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:10:38.425 16:22:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:38.425 16:22:12 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:10:38.425 16:22:12 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:38.425 16:22:12 -- target/ns_masking.sh@39 -- # grep 0x1 00:10:38.425 16:22:12 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:38.425 16:22:12 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:38.425 16:22:12 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:10:38.425 16:22:12 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:38.425 16:22:12 -- common/autotest_common.sh@641 -- # es=1 00:10:38.425 16:22:12 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:10:38.425 16:22:12 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:10:38.425 16:22:12 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:10:38.425 16:22:12 -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:10:38.425 16:22:12 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:38.425 16:22:12 -- target/ns_masking.sh@39 -- # grep 0x2 00:10:38.425 [ 0]:0x2 00:10:38.425 16:22:12 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:38.425 16:22:12 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:38.425 16:22:12 -- target/ns_masking.sh@40 -- # nguid=f4936281cc6949cb9bec6c9c3c9a73f4 00:10:38.425 16:22:12 -- target/ns_masking.sh@41 -- # [[ f4936281cc6949cb9bec6c9c3c9a73f4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:38.425 16:22:12 -- target/ns_masking.sh@105 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:38.425 16:22:12 -- common/autotest_common.sh@638 -- # local es=0 00:10:38.425 16:22:12 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:38.425 16:22:12 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:38.425 16:22:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:38.425 16:22:12 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:38.425 16:22:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:38.425 16:22:12 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:38.425 16:22:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:38.425 16:22:12 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:38.425 16:22:12 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:38.425 16:22:12 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:38.685 [2024-04-17 16:22:12.702448] nvmf_rpc.c:1770:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:10:38.685 2024/04/17 16:22:12 error on 
JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:10:38.685 request: 00:10:38.685 { 00:10:38.685 "method": "nvmf_ns_remove_host", 00:10:38.685 "params": { 00:10:38.685 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:38.685 "nsid": 2, 00:10:38.685 "host": "nqn.2016-06.io.spdk:host1" 00:10:38.685 } 00:10:38.685 } 00:10:38.685 Got JSON-RPC error response 00:10:38.685 GoRPCClient: error on JSON-RPC call 00:10:38.953 16:22:12 -- common/autotest_common.sh@641 -- # es=1 00:10:38.953 16:22:12 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:10:38.953 16:22:12 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:10:38.953 16:22:12 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:10:38.953 16:22:12 -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:10:38.953 16:22:12 -- common/autotest_common.sh@638 -- # local es=0 00:10:38.953 16:22:12 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:10:38.953 16:22:12 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:10:38.953 16:22:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:38.953 16:22:12 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:10:38.953 16:22:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:38.953 16:22:12 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:10:38.953 16:22:12 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:38.953 16:22:12 -- target/ns_masking.sh@39 -- # grep 0x1 00:10:38.953 16:22:12 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:38.953 16:22:12 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:38.953 16:22:12 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:10:38.953 16:22:12 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:38.953 16:22:12 -- common/autotest_common.sh@641 -- # es=1 00:10:38.953 16:22:12 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:10:38.953 16:22:12 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:10:38.953 16:22:12 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:10:38.953 16:22:12 -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:10:38.953 16:22:12 -- target/ns_masking.sh@39 -- # grep 0x2 00:10:38.953 16:22:12 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:38.953 [ 0]:0x2 00:10:38.953 16:22:12 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:38.953 16:22:12 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:38.953 16:22:12 -- target/ns_masking.sh@40 -- # nguid=f4936281cc6949cb9bec6c9c3c9a73f4 00:10:38.953 16:22:12 -- target/ns_masking.sh@41 -- # [[ f4936281cc6949cb9bec6c9c3c9a73f4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:38.953 16:22:12 -- target/ns_masking.sh@108 -- # disconnect 00:10:38.953 16:22:12 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:38.953 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.953 16:22:12 -- target/ns_masking.sh@110 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:39.216 16:22:13 -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:10:39.216 16:22:13 -- target/ns_masking.sh@114 -- # nvmftestfini 
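The Code=-32602 rejection above is the expected outcome: namespace 2, unlike namespace 1, was evidently created without per-host masking, so the target refuses to edit its host list. Reissued by hand, the assertion looks like the sketch below (rpc.py path as in the trace):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# rpc.py exits non-zero on a JSON-RPC error, which is what NOT expects here.
if "$rpc" nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1; then
    echo "masking edits on namespace 2 should be rejected" >&2
    exit 1
fi
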
00:10:39.216 16:22:13 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:39.216 16:22:13 -- nvmf/common.sh@117 -- # sync 00:10:39.216 16:22:13 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:39.216 16:22:13 -- nvmf/common.sh@120 -- # set +e 00:10:39.216 16:22:13 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:39.216 16:22:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:39.216 rmmod nvme_tcp 00:10:39.216 rmmod nvme_fabrics 00:10:39.216 rmmod nvme_keyring 00:10:39.216 16:22:13 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:39.216 16:22:13 -- nvmf/common.sh@124 -- # set -e 00:10:39.216 16:22:13 -- nvmf/common.sh@125 -- # return 0 00:10:39.216 16:22:13 -- nvmf/common.sh@478 -- # '[' -n 70712 ']' 00:10:39.216 16:22:13 -- nvmf/common.sh@479 -- # killprocess 70712 00:10:39.216 16:22:13 -- common/autotest_common.sh@936 -- # '[' -z 70712 ']' 00:10:39.216 16:22:13 -- common/autotest_common.sh@940 -- # kill -0 70712 00:10:39.216 16:22:13 -- common/autotest_common.sh@941 -- # uname 00:10:39.216 16:22:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:39.216 16:22:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70712 00:10:39.483 killing process with pid 70712 00:10:39.483 16:22:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:39.483 16:22:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:39.483 16:22:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70712' 00:10:39.483 16:22:13 -- common/autotest_common.sh@955 -- # kill 70712 00:10:39.483 16:22:13 -- common/autotest_common.sh@960 -- # wait 70712 00:10:39.741 16:22:13 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:39.741 16:22:13 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:39.741 16:22:13 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:10:39.741 16:22:13 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:39.741 16:22:13 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:39.741 16:22:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:39.741 16:22:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:39.741 16:22:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:39.741 16:22:13 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:39.741 00:10:39.741 real 0m13.799s 00:10:39.741 user 0m54.943s 00:10:39.741 sys 0m2.208s 00:10:39.741 16:22:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:39.741 16:22:13 -- common/autotest_common.sh@10 -- # set +x 00:10:39.741 ************************************ 00:10:39.742 END TEST nvmf_ns_masking 00:10:39.742 ************************************ 00:10:39.742 16:22:13 -- nvmf/nvmf.sh@37 -- # [[ 0 -eq 1 ]] 00:10:39.742 16:22:13 -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:10:39.742 16:22:13 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:39.742 16:22:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:39.742 16:22:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:39.742 16:22:13 -- common/autotest_common.sh@10 -- # set +x 00:10:40.001 ************************************ 00:10:40.001 START TEST nvmf_host_management 00:10:40.001 ************************************ 00:10:40.001 16:22:13 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:40.001 * Looking for test storage... 
00:10:40.001 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:40.001 16:22:13 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:40.001 16:22:13 -- nvmf/common.sh@7 -- # uname -s 00:10:40.001 16:22:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:40.001 16:22:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:40.001 16:22:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:40.001 16:22:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:40.001 16:22:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:40.001 16:22:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:40.001 16:22:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:40.001 16:22:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:40.001 16:22:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:40.001 16:22:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:40.001 16:22:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d 00:10:40.001 16:22:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=35bbb10f-fc38-42ac-b909-033700c5e05d 00:10:40.001 16:22:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:40.001 16:22:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:40.001 16:22:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:40.001 16:22:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:40.001 16:22:13 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:40.001 16:22:13 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:40.001 16:22:13 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:40.001 16:22:13 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:40.001 16:22:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.001 16:22:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.001 16:22:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.001 16:22:13 -- paths/export.sh@5 -- # export PATH 00:10:40.001 16:22:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.001 16:22:13 -- nvmf/common.sh@47 -- # : 0 00:10:40.001 16:22:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:40.001 16:22:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:40.001 16:22:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:40.001 16:22:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:40.001 16:22:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:40.001 16:22:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:40.001 16:22:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:40.001 16:22:13 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:40.001 16:22:13 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:40.001 16:22:13 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:40.001 16:22:13 -- target/host_management.sh@104 -- # nvmftestinit 00:10:40.001 16:22:13 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:40.001 16:22:13 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:40.001 16:22:13 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:40.001 16:22:13 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:40.001 16:22:13 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:40.001 16:22:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.001 16:22:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:40.001 16:22:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.001 16:22:13 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:10:40.001 16:22:13 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:10:40.001 16:22:13 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:10:40.001 16:22:13 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:10:40.001 16:22:13 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:10:40.001 16:22:13 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:10:40.001 16:22:13 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:40.001 16:22:13 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:40.001 16:22:13 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:40.001 16:22:13 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:40.001 16:22:13 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:40.001 16:22:13 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:40.001 16:22:13 -- 
nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:40.001 16:22:13 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:40.001 16:22:13 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:40.001 16:22:13 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:40.001 16:22:13 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:40.001 16:22:13 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:40.001 16:22:13 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:40.001 16:22:13 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:40.001 Cannot find device "nvmf_tgt_br" 00:10:40.001 16:22:13 -- nvmf/common.sh@155 -- # true 00:10:40.001 16:22:13 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:40.001 Cannot find device "nvmf_tgt_br2" 00:10:40.001 16:22:13 -- nvmf/common.sh@156 -- # true 00:10:40.001 16:22:13 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:40.001 16:22:13 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:40.001 Cannot find device "nvmf_tgt_br" 00:10:40.001 16:22:13 -- nvmf/common.sh@158 -- # true 00:10:40.001 16:22:13 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:40.001 Cannot find device "nvmf_tgt_br2" 00:10:40.001 16:22:13 -- nvmf/common.sh@159 -- # true 00:10:40.001 16:22:13 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:40.001 16:22:14 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:40.288 16:22:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:40.288 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:40.288 16:22:14 -- nvmf/common.sh@162 -- # true 00:10:40.288 16:22:14 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:40.288 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:40.288 16:22:14 -- nvmf/common.sh@163 -- # true 00:10:40.288 16:22:14 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:40.288 16:22:14 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:40.288 16:22:14 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:40.288 16:22:14 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:40.288 16:22:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:40.288 16:22:14 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:40.288 16:22:14 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:40.288 16:22:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:40.288 16:22:14 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:40.288 16:22:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:40.288 16:22:14 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:40.288 16:22:14 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:40.288 16:22:14 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:40.288 16:22:14 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:40.288 16:22:14 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:40.288 16:22:14 -- nvmf/common.sh@189 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link set lo up 00:10:40.288 16:22:14 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:40.288 16:22:14 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:40.288 16:22:14 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:40.288 16:22:14 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:40.288 16:22:14 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:40.288 16:22:14 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:40.288 16:22:14 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:40.288 16:22:14 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:40.288 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:40.288 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:10:40.288 00:10:40.288 --- 10.0.0.2 ping statistics --- 00:10:40.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.288 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:10:40.288 16:22:14 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:40.288 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:40.288 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:10:40.288 00:10:40.288 --- 10.0.0.3 ping statistics --- 00:10:40.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.288 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:10:40.288 16:22:14 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:40.288 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:40.288 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:10:40.288 00:10:40.288 --- 10.0.0.1 ping statistics --- 00:10:40.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.289 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:10:40.289 16:22:14 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:40.289 16:22:14 -- nvmf/common.sh@422 -- # return 0 00:10:40.289 16:22:14 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:40.289 16:22:14 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:40.289 16:22:14 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:40.289 16:22:14 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:40.289 16:22:14 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:40.289 16:22:14 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:40.289 16:22:14 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:40.289 16:22:14 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:10:40.289 16:22:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:40.289 16:22:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:40.289 16:22:14 -- common/autotest_common.sh@10 -- # set +x 00:10:40.547 ************************************ 00:10:40.547 START TEST nvmf_host_management 00:10:40.547 ************************************ 00:10:40.547 16:22:14 -- common/autotest_common.sh@1111 -- # nvmf_host_management 00:10:40.547 16:22:14 -- target/host_management.sh@69 -- # starttarget 00:10:40.547 16:22:14 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:10:40.547 16:22:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:40.547 16:22:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:40.547 16:22:14 -- common/autotest_common.sh@10 -- # set +x 00:10:40.547 16:22:14 -- nvmf/common.sh@470 -- # nvmfpid=71282 00:10:40.547 
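Condensed from the nvmf_veth_init trace above: the target runs in its own network namespace and is reached from the initiator through veth pairs enslaved to a host bridge. The essential commands, minus cleanup and the second target interface (all taken from the trace):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2   # initiator -> target data-path sanity check
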
16:22:14 -- nvmf/common.sh@471 -- # waitforlisten 71282 00:10:40.547 16:22:14 -- common/autotest_common.sh@817 -- # '[' -z 71282 ']' 00:10:40.547 16:22:14 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:10:40.547 16:22:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.547 16:22:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:40.547 16:22:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.547 16:22:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:40.547 16:22:14 -- common/autotest_common.sh@10 -- # set +x 00:10:40.547 [2024-04-17 16:22:14.405746] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:10:40.547 [2024-04-17 16:22:14.405855] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:40.547 [2024-04-17 16:22:14.543535] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:40.805 [2024-04-17 16:22:14.673403] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:40.805 [2024-04-17 16:22:14.673912] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:40.805 [2024-04-17 16:22:14.674416] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:40.805 [2024-04-17 16:22:14.674646] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:40.805 [2024-04-17 16:22:14.675069] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
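waitforlisten, entered above, gates the rest of the test on the target's RPC socket. The real helper carries more bookkeeping; a rough rendering of the polling loop it performs (the rpc_get_methods probe and the rpc.py -s/-t options are assumptions, not shown in this trace):

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 100; i > 0; i--)); do
        # Bail out early if the app died before it ever started listening.
        kill -0 "$pid" 2> /dev/null || return 1
        # Any cheap RPC succeeding proves the socket is accepting calls.
        scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null && return 0
        sleep 0.1
    done
    return 1
}
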
00:10:40.805 [2024-04-17 16:22:14.675488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:40.805 [2024-04-17 16:22:14.675571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:40.805 [2024-04-17 16:22:14.675692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:10:40.805 [2024-04-17 16:22:14.675814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:41.739 16:22:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:41.739 16:22:15 -- common/autotest_common.sh@850 -- # return 0 00:10:41.739 16:22:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:41.739 16:22:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:41.739 16:22:15 -- common/autotest_common.sh@10 -- # set +x 00:10:41.739 16:22:15 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:41.739 16:22:15 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:41.739 16:22:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:41.739 16:22:15 -- common/autotest_common.sh@10 -- # set +x 00:10:41.739 [2024-04-17 16:22:15.473297] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:41.739 16:22:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:41.739 16:22:15 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:10:41.739 16:22:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:41.739 16:22:15 -- common/autotest_common.sh@10 -- # set +x 00:10:41.739 16:22:15 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:10:41.739 16:22:15 -- target/host_management.sh@23 -- # cat 00:10:41.739 16:22:15 -- target/host_management.sh@30 -- # rpc_cmd 00:10:41.739 16:22:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:41.739 16:22:15 -- common/autotest_common.sh@10 -- # set +x 00:10:41.739 Malloc0 00:10:41.739 [2024-04-17 16:22:15.558518] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:41.739 16:22:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:41.739 16:22:15 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:10:41.739 16:22:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:41.739 16:22:15 -- common/autotest_common.sh@10 -- # set +x 00:10:41.739 16:22:15 -- target/host_management.sh@73 -- # perfpid=71358 00:10:41.739 16:22:15 -- target/host_management.sh@74 -- # waitforlisten 71358 /var/tmp/bdevperf.sock 00:10:41.739 16:22:15 -- common/autotest_common.sh@817 -- # '[' -z 71358 ']' 00:10:41.739 16:22:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:41.739 16:22:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:41.739 16:22:15 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:10:41.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:41.739 16:22:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:10:41.739 16:22:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:41.739 16:22:15 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:10:41.739 16:22:15 -- common/autotest_common.sh@10 -- # set +x 00:10:41.739 16:22:15 -- nvmf/common.sh@521 -- # config=() 00:10:41.739 16:22:15 -- nvmf/common.sh@521 -- # local subsystem config 00:10:41.739 16:22:15 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:10:41.739 16:22:15 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:10:41.739 { 00:10:41.739 "params": { 00:10:41.739 "name": "Nvme$subsystem", 00:10:41.739 "trtype": "$TEST_TRANSPORT", 00:10:41.739 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:41.739 "adrfam": "ipv4", 00:10:41.739 "trsvcid": "$NVMF_PORT", 00:10:41.739 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:41.739 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:41.739 "hdgst": ${hdgst:-false}, 00:10:41.739 "ddgst": ${ddgst:-false} 00:10:41.739 }, 00:10:41.740 "method": "bdev_nvme_attach_controller" 00:10:41.740 } 00:10:41.740 EOF 00:10:41.740 )") 00:10:41.740 16:22:15 -- nvmf/common.sh@543 -- # cat 00:10:41.740 16:22:15 -- nvmf/common.sh@545 -- # jq . 00:10:41.740 16:22:15 -- nvmf/common.sh@546 -- # IFS=, 00:10:41.740 16:22:15 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:10:41.740 "params": { 00:10:41.740 "name": "Nvme0", 00:10:41.740 "trtype": "tcp", 00:10:41.740 "traddr": "10.0.0.2", 00:10:41.740 "adrfam": "ipv4", 00:10:41.740 "trsvcid": "4420", 00:10:41.740 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:41.740 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:41.740 "hdgst": false, 00:10:41.740 "ddgst": false 00:10:41.740 }, 00:10:41.740 "method": "bdev_nvme_attach_controller" 00:10:41.740 }' 00:10:41.740 [2024-04-17 16:22:15.681031] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:10:41.740 [2024-04-17 16:22:15.681181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71358 ] 00:10:41.998 [2024-04-17 16:22:15.821945] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.998 [2024-04-17 16:22:15.970873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.256 Running I/O for 10 seconds... 
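Stripping the harness away from the launch above: gen_nvmf_target_json emits the bdev_nvme_attach_controller object printed in the trace, and bdevperf reads it over an inherited fd. A hand-rolled equivalent follows; the params come from the trace, while the surrounding subsystems/bdev wrapper is the standard SPDK app-config shape assumed here rather than shown in the log:

cat > /tmp/nvme0.json << 'JSON'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false, "ddgst": false
      }
    }]
  }]
}
JSON
# Workload flags exactly as traced: 64-deep queue, 64 KiB verify I/O, 10 s.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock --json /tmp/nvme0.json \
    -q 64 -o 65536 -w verify -t 10
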
00:10:42.825 16:22:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:42.825 16:22:16 -- common/autotest_common.sh@850 -- # return 0 00:10:42.825 16:22:16 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:10:42.825 16:22:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:42.825 16:22:16 -- common/autotest_common.sh@10 -- # set +x 00:10:42.825 16:22:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:42.825 16:22:16 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:42.825 16:22:16 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:10:42.825 16:22:16 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:10:42.825 16:22:16 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:10:42.825 16:22:16 -- target/host_management.sh@52 -- # local ret=1 00:10:42.825 16:22:16 -- target/host_management.sh@53 -- # local i 00:10:42.825 16:22:16 -- target/host_management.sh@54 -- # (( i = 10 )) 00:10:42.825 16:22:16 -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:42.825 16:22:16 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:10:42.825 16:22:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:42.825 16:22:16 -- common/autotest_common.sh@10 -- # set +x 00:10:42.825 16:22:16 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:10:42.825 16:22:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:42.825 16:22:16 -- target/host_management.sh@55 -- # read_io_count=643 00:10:42.825 16:22:16 -- target/host_management.sh@58 -- # '[' 643 -ge 100 ']' 00:10:42.825 16:22:16 -- target/host_management.sh@59 -- # ret=0 00:10:42.825 16:22:16 -- target/host_management.sh@60 -- # break 00:10:42.825 16:22:16 -- target/host_management.sh@64 -- # return 0 00:10:42.825 16:22:16 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:42.825 16:22:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:42.825 16:22:16 -- common/autotest_common.sh@10 -- # set +x 00:10:42.825 [2024-04-17 16:22:16.720469] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ddfc50 is same with the state(5) to be set 00:10:42.825 [2024-04-17 16:22:16.720543] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ddfc50 is same with the state(5) to be set 00:10:42.825 [2024-04-17 16:22:16.720555] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ddfc50 is same with the state(5) to be set 00:10:42.825 [2024-04-17 16:22:16.720565] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ddfc50 is same with the state(5) to be set 00:10:42.825 [2024-04-17 16:22:16.720575] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ddfc50 is same with the state(5) to be set 00:10:42.825 [2024-04-17 16:22:16.720584] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ddfc50 is same with the state(5) to be set 00:10:42.825 [2024-04-17 16:22:16.720593] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ddfc50 is same with the state(5) to be set 00:10:42.825 [2024-04-17 16:22:16.720602] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ddfc50 is same with the 
state(5) to be set 00:10:42.826 [2024-04-17
16:22:16.721046] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ddfc50 is same with the state(5) to be set 00:10:42.826 [2024-04-17 16:22:16.721055] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ddfc50 is same with the state(5) to be set 00:10:42.826 [2024-04-17 16:22:16.721063] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ddfc50 is same with the state(5) to be set 00:10:42.826 [2024-04-17 16:22:16.721072] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ddfc50 is same with the state(5) to be set 00:10:42.826 [2024-04-17 16:22:16.721080] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ddfc50 is same with the state(5) to be set 00:10:42.826 [2024-04-17 16:22:16.721089] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ddfc50 is same with the state(5) to be set 00:10:42.826 [2024-04-17 16:22:16.721097] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ddfc50 is same with the state(5) to be set 00:10:42.826 [2024-04-17 16:22:16.721105] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ddfc50 is same with the state(5) to be set 00:10:42.826 [2024-04-17 16:22:16.721114] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ddfc50 is same with the state(5) to be set 00:10:42.826 [2024-04-17 16:22:16.721122] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ddfc50 is same with the state(5) to be set 00:10:42.826 [2024-04-17 16:22:16.721130] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ddfc50 is same with the state(5) to be set 00:10:42.826 [2024-04-17 16:22:16.721139] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ddfc50 is same with the state(5) to be set 00:10:42.826 16:22:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:42.826 16:22:16 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:42.826 16:22:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:42.826 16:22:16 -- common/autotest_common.sh@10 -- # set +x 00:10:42.826 16:22:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:42.826 16:22:16 -- target/host_management.sh@87 -- # sleep 1 00:10:42.826 [2024-04-17 16:22:16.741410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:10:42.826 [2024-04-17 16:22:16.741463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:42.826 [2024-04-17 16:22:16.741478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:10:42.826 [2024-04-17 16:22:16.741488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:42.826 [2024-04-17 16:22:16.741502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:10:42.826 [2024-04-17 16:22:16.741512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:42.826 [2024-04-17 16:22:16.741523] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:10:42.826 [2024-04-17 16:22:16.741532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:42.826 [2024-04-17 16:22:16.741542] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ad0a0 is same with the state(5) to be set
00:10:42.826 [2024-04-17 16:22:16.741656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:10:42.826 [2024-04-17 16:22:16.741672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-04-17 16:22:16.741693 through 16:22:16.743079: 63 further command/completion pairs condensed. Each is a WRITE sqid:1 nsid:1 len:128 (cid:0 through cid:62, lba:98304 through lba:106240 in steps of 128), answered with the same ABORTED - SQ DELETION (00/08) completion shown above.]
00:10:42.828 [2024-04-17 16:22:16.743180] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9ac690 was disconnected and freed. reset controller.
00:10:42.828 [2024-04-17 16:22:16.744304] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:10:42.828 task offset: 98176 on job bdev=Nvme0n1 fails
00:10:42.828
00:10:42.828 Latency(us)
00:10:42.828 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:42.828 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:10:42.828 Job: Nvme0n1 ended in about 0.59 seconds with error
00:10:42.828 Verification LBA range: start 0x0 length 0x400
00:10:42.828 Nvme0n1 : 0.59 1296.63 81.04 108.19 0.00 44344.35 2204.39 40989.79
00:10:42.828 ===================================================================================================================
00:10:42.828 Total : 1296.63 81.04 108.19 0.00 44344.35 2204.39 40989.79
00:10:42.828 [2024-04-17 16:22:16.746724] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:10:42.828 [2024-04-17 16:22:16.746754] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ad0a0 (9): Bad file descriptor
00:10:42.828 [2024-04-17 16:22:16.751604] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
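Two quick consistency checks on the abort storm and the failure summary above, using only arithmetic on the logged numbers:

    64 aborted commands (READ cid:63 plus WRITE cid:0 through cid:62)  =  the configured queue depth of 64
    len:128 blocks per I/O at IO size 65536                            =>  implies a 512 B block size (128 x 512 = 65536)
    1296.63 IOPS x 65536 B                                             =  81.04 MiB/s, matching the throughput column
    64 failed I/Os / ~0.59 s runtime                                   ~  108 Fail/s, as reported

So every I/O outstanding at the moment the submission queue was deleted was aborted, and the controller reset logged right after is the expected recovery path for this fault-injection step.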
00:10:43.763 16:22:17 -- target/host_management.sh@91 -- # kill -9 71358 00:10:43.763 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (71358) - No such process 00:10:43.763 16:22:17 -- target/host_management.sh@91 -- # true 00:10:43.763 16:22:17 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:10:43.763 16:22:17 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:10:43.763 16:22:17 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:10:43.763 16:22:17 -- nvmf/common.sh@521 -- # config=() 00:10:43.763 16:22:17 -- nvmf/common.sh@521 -- # local subsystem config 00:10:43.763 16:22:17 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:10:43.763 16:22:17 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:10:43.763 { 00:10:43.763 "params": { 00:10:43.763 "name": "Nvme$subsystem", 00:10:43.763 "trtype": "$TEST_TRANSPORT", 00:10:43.763 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:43.763 "adrfam": "ipv4", 00:10:43.763 "trsvcid": "$NVMF_PORT", 00:10:43.763 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:43.763 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:43.763 "hdgst": ${hdgst:-false}, 00:10:43.763 "ddgst": ${ddgst:-false} 00:10:43.763 }, 00:10:43.763 "method": "bdev_nvme_attach_controller" 00:10:43.763 } 00:10:43.763 EOF 00:10:43.763 )") 00:10:43.763 16:22:17 -- nvmf/common.sh@543 -- # cat 00:10:43.763 16:22:17 -- nvmf/common.sh@545 -- # jq . 00:10:43.763 16:22:17 -- nvmf/common.sh@546 -- # IFS=, 00:10:43.763 16:22:17 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:10:43.763 "params": { 00:10:43.763 "name": "Nvme0", 00:10:43.763 "trtype": "tcp", 00:10:43.763 "traddr": "10.0.0.2", 00:10:43.763 "adrfam": "ipv4", 00:10:43.763 "trsvcid": "4420", 00:10:43.763 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:43.763 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:43.763 "hdgst": false, 00:10:43.763 "ddgst": false 00:10:43.763 }, 00:10:43.763 "method": "bdev_nvme_attach_controller" 00:10:43.763 }' 00:10:43.763 [2024-04-17 16:22:17.793690] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:10:43.763 [2024-04-17 16:22:17.793789] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71404 ] 00:10:44.022 [2024-04-17 16:22:17.930990] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.281 [2024-04-17 16:22:18.102131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.281 Running I/O for 1 seconds... 
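Before the one-second run's results, a note on the config plumbing above: gen_nvmf_target_json expands its heredoc template once per subsystem argument (here, subsystem 0) into the bdev_nvme_attach_controller entry printed after it, and bdevperf reads the result from a file descriptor (--json /dev/fd/62), so no config file ever touches disk. A standalone sketch of the equivalent invocation follows; the inner object is copied from the printf output above, while the outer "subsystems"/"bdev" wrapper is inferred from the jq/printf steps and SPDK's usual JSON layout, not quoted verbatim from this log:

gen_config() {
  # Inner object taken from the log; the surrounding wrapper is an assumption.
  cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
}
# Same shape of invocation as above: the config arrives on a file descriptor
# via process substitution, so nothing is left behind on disk.
./build/examples/bdevperf -r /var/tmp/bdevperf.sock -q 64 -o 65536 -w verify -t 1 --json <(gen_config)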
00:10:45.654 00:10:45.654 Latency(us) 00:10:45.654 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:45.654 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:45.654 Verification LBA range: start 0x0 length 0x400 00:10:45.654 Nvme0n1 : 1.03 1371.18 85.70 0.00 0.00 45769.57 6136.55 45279.42 00:10:45.654 =================================================================================================================== 00:10:45.654 Total : 1371.18 85.70 0.00 0.00 45769.57 6136.55 45279.42 00:10:45.654 16:22:19 -- target/host_management.sh@101 -- # stoptarget 00:10:45.654 16:22:19 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:10:45.654 16:22:19 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:10:45.654 16:22:19 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:10:45.654 16:22:19 -- target/host_management.sh@40 -- # nvmftestfini 00:10:45.654 16:22:19 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:45.654 16:22:19 -- nvmf/common.sh@117 -- # sync 00:10:45.654 16:22:19 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:45.654 16:22:19 -- nvmf/common.sh@120 -- # set +e 00:10:45.654 16:22:19 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:45.654 16:22:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:45.654 rmmod nvme_tcp 00:10:45.654 rmmod nvme_fabrics 00:10:45.654 rmmod nvme_keyring 00:10:45.654 16:22:19 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:45.654 16:22:19 -- nvmf/common.sh@124 -- # set -e 00:10:45.654 16:22:19 -- nvmf/common.sh@125 -- # return 0 00:10:45.654 16:22:19 -- nvmf/common.sh@478 -- # '[' -n 71282 ']' 00:10:45.654 16:22:19 -- nvmf/common.sh@479 -- # killprocess 71282 00:10:45.654 16:22:19 -- common/autotest_common.sh@936 -- # '[' -z 71282 ']' 00:10:45.654 16:22:19 -- common/autotest_common.sh@940 -- # kill -0 71282 00:10:45.654 16:22:19 -- common/autotest_common.sh@941 -- # uname 00:10:45.654 16:22:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:45.654 16:22:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71282 00:10:45.911 16:22:19 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:45.911 16:22:19 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:45.911 killing process with pid 71282 00:10:45.911 16:22:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71282' 00:10:45.911 16:22:19 -- common/autotest_common.sh@955 -- # kill 71282 00:10:45.911 16:22:19 -- common/autotest_common.sh@960 -- # wait 71282 00:10:46.168 [2024-04-17 16:22:19.998646] app.c: 628:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:10:46.168 16:22:20 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:46.168 16:22:20 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:46.168 16:22:20 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:10:46.168 16:22:20 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:46.168 16:22:20 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:46.168 16:22:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.168 16:22:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:46.168 16:22:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.168 16:22:20 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:46.168 00:10:46.168 real 0m5.722s 00:10:46.168 user 
0m23.910s 00:10:46.168 sys 0m1.217s 00:10:46.168 16:22:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:46.168 16:22:20 -- common/autotest_common.sh@10 -- # set +x 00:10:46.168 ************************************ 00:10:46.168 END TEST nvmf_host_management 00:10:46.168 ************************************ 00:10:46.168 16:22:20 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:10:46.168 00:10:46.168 real 0m6.319s 00:10:46.168 user 0m24.068s 00:10:46.168 sys 0m1.495s 00:10:46.168 16:22:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:46.168 16:22:20 -- common/autotest_common.sh@10 -- # set +x 00:10:46.168 ************************************ 00:10:46.168 END TEST nvmf_host_management 00:10:46.168 ************************************ 00:10:46.168 16:22:20 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:46.168 16:22:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:46.168 16:22:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:46.168 16:22:20 -- common/autotest_common.sh@10 -- # set +x 00:10:46.426 ************************************ 00:10:46.426 START TEST nvmf_lvol 00:10:46.426 ************************************ 00:10:46.426 16:22:20 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:46.426 * Looking for test storage... 00:10:46.426 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:46.426 16:22:20 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:46.426 16:22:20 -- nvmf/common.sh@7 -- # uname -s 00:10:46.426 16:22:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:46.426 16:22:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:46.426 16:22:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:46.426 16:22:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:46.426 16:22:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:46.426 16:22:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:46.426 16:22:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:46.426 16:22:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:46.426 16:22:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:46.426 16:22:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:46.426 16:22:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d 00:10:46.426 16:22:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=35bbb10f-fc38-42ac-b909-033700c5e05d 00:10:46.426 16:22:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:46.426 16:22:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:46.426 16:22:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:46.426 16:22:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:46.426 16:22:20 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:46.426 16:22:20 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:46.426 16:22:20 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:46.426 16:22:20 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:46.426 16:22:20 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same three toolchain dirs repeated several more times; condensed]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:46.426 16:22:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[same PATH as above with another copy of the toolchain dirs prepended; condensed]
00:10:46.427 16:22:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[same again, one more prepend; condensed]
00:10:46.427 16:22:20 -- paths/export.sh@5 -- # export PATH
00:10:46.427 16:22:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[the exported PATH echoed back; condensed]
00:10:46.427 16:22:20 -- nvmf/common.sh@47 -- # : 0
00:10:46.427 16:22:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:10:46.427 16:22:20 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:10:46.427 16:22:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:10:46.427 16:22:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:10:46.427 16:22:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:10:46.427 16:22:20 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:10:46.427 16:22:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:10:46.427 16:22:20 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:10:46.427 16:22:20 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64
00:10:46.427 16:22:20 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:10:46.427 16:22:20 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20
00:10:46.427 16:22:20 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30
00:10:46.427 16:22:20 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:10:46.427 16:22:20 -- target/nvmf_lvol.sh@18 -- # nvmftestinit
00:10:46.427 16:22:20 -- nvmf/common.sh@430 -- # '[' -z tcp ']'
00:10:46.427 16:22:20 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT
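One detail worth calling out in the trace above: nvmftestinit registers its cleanup trap before prepare_net_devs creates anything, so a run that dies mid-setup still gets torn down. The idiom in isolation, as a minimal sketch (the real nvmftestfini, partially visible earlier in this log, also unloads the nvme-tcp modules and flushes addresses):

# Register teardown before doing any setup; EXIT fires on normal
# completion, while SIGINT/SIGTERM cover ^C and kill.
cleanup() {
    ip netns delete nvmf_tgt_ns_spdk 2> /dev/null || true
}
trap cleanup SIGINT SIGTERM EXIT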
00:10:46.427 16:22:20 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:46.427 16:22:20 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:46.427 16:22:20 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:46.427 16:22:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.427 16:22:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:46.427 16:22:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.427 16:22:20 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:10:46.427 16:22:20 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:10:46.427 16:22:20 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:10:46.427 16:22:20 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:10:46.427 16:22:20 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:10:46.427 16:22:20 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:10:46.427 16:22:20 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:46.427 16:22:20 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:46.427 16:22:20 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:46.427 16:22:20 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:46.427 16:22:20 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:46.427 16:22:20 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:46.427 16:22:20 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:46.427 16:22:20 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:46.427 16:22:20 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:46.427 16:22:20 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:46.427 16:22:20 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:46.427 16:22:20 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:46.427 16:22:20 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:46.427 16:22:20 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:46.427 Cannot find device "nvmf_tgt_br" 00:10:46.427 16:22:20 -- nvmf/common.sh@155 -- # true 00:10:46.427 16:22:20 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:46.427 Cannot find device "nvmf_tgt_br2" 00:10:46.427 16:22:20 -- nvmf/common.sh@156 -- # true 00:10:46.427 16:22:20 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:46.427 16:22:20 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:46.427 Cannot find device "nvmf_tgt_br" 00:10:46.427 16:22:20 -- nvmf/common.sh@158 -- # true 00:10:46.427 16:22:20 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:46.427 Cannot find device "nvmf_tgt_br2" 00:10:46.427 16:22:20 -- nvmf/common.sh@159 -- # true 00:10:46.427 16:22:20 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:46.427 16:22:20 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:46.742 16:22:20 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:46.742 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:46.742 16:22:20 -- nvmf/common.sh@162 -- # true 00:10:46.742 16:22:20 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:46.742 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:46.742 16:22:20 -- nvmf/common.sh@163 -- # true 00:10:46.742 16:22:20 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:46.742 16:22:20 -- nvmf/common.sh@169 -- # ip link add 
nvmf_init_if type veth peer name nvmf_init_br 00:10:46.742 16:22:20 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:46.742 16:22:20 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:46.742 16:22:20 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:46.742 16:22:20 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:46.742 16:22:20 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:46.742 16:22:20 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:46.742 16:22:20 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:46.742 16:22:20 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:46.742 16:22:20 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:46.742 16:22:20 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:46.742 16:22:20 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:46.742 16:22:20 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:46.742 16:22:20 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:46.742 16:22:20 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:46.742 16:22:20 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:46.742 16:22:20 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:46.742 16:22:20 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:46.742 16:22:20 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:46.742 16:22:20 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:46.742 16:22:20 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:46.742 16:22:20 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:46.742 16:22:20 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:46.742 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:46.742 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:10:46.742 00:10:46.742 --- 10.0.0.2 ping statistics --- 00:10:46.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:46.742 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:10:46.742 16:22:20 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:46.742 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:46.742 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:10:46.742 00:10:46.742 --- 10.0.0.3 ping statistics --- 00:10:46.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:46.742 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:10:46.742 16:22:20 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:46.742 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:46.742 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:10:46.742 00:10:46.742 --- 10.0.0.1 ping statistics --- 00:10:46.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:46.742 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:10:46.742 16:22:20 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:46.742 16:22:20 -- nvmf/common.sh@422 -- # return 0 00:10:46.742 16:22:20 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:46.742 16:22:20 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:46.742 16:22:20 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:46.742 16:22:20 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:46.742 16:22:20 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:46.742 16:22:20 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:46.742 16:22:20 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:46.742 16:22:20 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:10:46.742 16:22:20 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:46.742 16:22:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:46.742 16:22:20 -- common/autotest_common.sh@10 -- # set +x 00:10:46.742 16:22:20 -- nvmf/common.sh@470 -- # nvmfpid=71639 00:10:46.742 16:22:20 -- nvmf/common.sh@471 -- # waitforlisten 71639 00:10:46.742 16:22:20 -- common/autotest_common.sh@817 -- # '[' -z 71639 ']' 00:10:46.742 16:22:20 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:46.742 16:22:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.742 16:22:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:46.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:46.742 16:22:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.742 16:22:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:46.742 16:22:20 -- common/autotest_common.sh@10 -- # set +x 00:10:47.003 [2024-04-17 16:22:20.804898] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:10:47.003 [2024-04-17 16:22:20.805068] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:47.003 [2024-04-17 16:22:20.944587] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:47.262 [2024-04-17 16:22:21.145862] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:47.262 [2024-04-17 16:22:21.145990] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:47.262 [2024-04-17 16:22:21.146007] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:47.262 [2024-04-17 16:22:21.146018] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:47.262 [2024-04-17 16:22:21.146028] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
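Condensed from the ip/iptables trace above, this is the topology nvmf_veth_init leaves behind before the target starts. All interface names, addresses, and commands are taken verbatim from the log; the diagram and the omission of the individual "up" toggles are editorial:

#   default netns                       netns nvmf_tgt_ns_spdk
#   nvmf_init_if  10.0.0.1/24           nvmf_tgt_if   10.0.0.2/24
#        |                              nvmf_tgt_if2  10.0.0.3/24
#   nvmf_init_br ---- nvmf_br ---- nvmf_tgt_br / nvmf_tgt_br2
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br    # likewise nvmf_tgt_br and nvmf_tgt_br2
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings that follow in the log (10.0.0.2, 10.0.0.3 from the default namespace, 10.0.0.1 from inside the target namespace) verify exactly this wiring before the nvmf target is launched inside the namespace.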
00:10:47.262 [2024-04-17 16:22:21.147377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:47.262 [2024-04-17 16:22:21.147593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:47.262 [2024-04-17 16:22:21.147601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.829 16:22:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:47.829 16:22:21 -- common/autotest_common.sh@850 -- # return 0 00:10:47.829 16:22:21 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:47.829 16:22:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:47.829 16:22:21 -- common/autotest_common.sh@10 -- # set +x 00:10:47.829 16:22:21 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:47.829 16:22:21 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:48.086 [2024-04-17 16:22:22.085284] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:48.086 16:22:22 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:48.345 16:22:22 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:10:48.602 16:22:22 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:48.860 16:22:22 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:10:48.860 16:22:22 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:10:48.860 16:22:22 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:10:49.427 16:22:23 -- target/nvmf_lvol.sh@29 -- # lvs=19a07f3a-e353-44c1-aaae-d0ef8938f5d6 00:10:49.427 16:22:23 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 19a07f3a-e353-44c1-aaae-d0ef8938f5d6 lvol 20 00:10:49.427 16:22:23 -- target/nvmf_lvol.sh@32 -- # lvol=0ddfa952-0cb9-4035-93a1-b4a9a7ff1003 00:10:49.427 16:22:23 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:49.993 16:22:23 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0ddfa952-0cb9-4035-93a1-b4a9a7ff1003 00:10:50.251 16:22:24 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:50.509 [2024-04-17 16:22:24.356163] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:50.509 16:22:24 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:50.767 16:22:24 -- target/nvmf_lvol.sh@42 -- # perf_pid=71792 00:10:50.767 16:22:24 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:10:50.767 16:22:24 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:10:51.702 16:22:25 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 0ddfa952-0cb9-4035-93a1-b4a9a7ff1003 MY_SNAPSHOT 00:10:52.269 16:22:26 -- target/nvmf_lvol.sh@47 -- # snapshot=2878fd2e-da57-4fe9-bf42-64e999981d3b 00:10:52.269 16:22:26 -- target/nvmf_lvol.sh@48 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 0ddfa952-0cb9-4035-93a1-b4a9a7ff1003 30 00:10:52.527 16:22:26 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 2878fd2e-da57-4fe9-bf42-64e999981d3b MY_CLONE 00:10:52.785 16:22:26 -- target/nvmf_lvol.sh@49 -- # clone=bd6c4575-0540-41ed-b002-b4c8d8308fe1 00:10:52.785 16:22:26 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate bd6c4575-0540-41ed-b002-b4c8d8308fe1 00:10:53.772 16:22:27 -- target/nvmf_lvol.sh@53 -- # wait 71792 00:11:01.951 Initializing NVMe Controllers 00:11:01.951 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:01.951 Controller IO queue size 128, less than required. 00:11:01.951 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:01.951 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:11:01.951 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:11:01.951 Initialization complete. Launching workers. 00:11:01.951 ======================================================== 00:11:01.951 Latency(us) 00:11:01.951 Device Information : IOPS MiB/s Average min max 00:11:01.951 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9253.10 36.14 13833.37 1759.29 101551.14 00:11:01.951 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9118.80 35.62 14044.91 2470.32 71313.67 00:11:01.951 ======================================================== 00:11:01.951 Total : 18371.90 71.77 13938.37 1759.29 101551.14 00:11:01.951 00:11:01.951 16:22:34 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:01.951 16:22:35 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 0ddfa952-0cb9-4035-93a1-b4a9a7ff1003 00:11:01.951 16:22:35 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 19a07f3a-e353-44c1-aaae-d0ef8938f5d6 00:11:01.951 16:22:35 -- target/nvmf_lvol.sh@60 -- # rm -f 00:11:01.951 16:22:35 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:11:01.951 16:22:35 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:11:01.951 16:22:35 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:01.951 16:22:35 -- nvmf/common.sh@117 -- # sync 00:11:01.951 16:22:35 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:01.951 16:22:35 -- nvmf/common.sh@120 -- # set +e 00:11:01.951 16:22:35 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:01.951 16:22:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:01.951 rmmod nvme_tcp 00:11:01.951 rmmod nvme_fabrics 00:11:01.951 rmmod nvme_keyring 00:11:01.951 16:22:35 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:01.951 16:22:35 -- nvmf/common.sh@124 -- # set -e 00:11:01.951 16:22:35 -- nvmf/common.sh@125 -- # return 0 00:11:01.951 16:22:35 -- nvmf/common.sh@478 -- # '[' -n 71639 ']' 00:11:01.951 16:22:35 -- nvmf/common.sh@479 -- # killprocess 71639 00:11:01.951 16:22:35 -- common/autotest_common.sh@936 -- # '[' -z 71639 ']' 00:11:01.951 16:22:35 -- common/autotest_common.sh@940 -- # kill -0 71639 00:11:01.951 16:22:35 -- common/autotest_common.sh@941 -- # uname 00:11:01.951 16:22:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:01.951 16:22:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o 
comm= 71639 00:11:01.951 killing process with pid 71639 00:11:01.951 16:22:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:01.951 16:22:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:01.951 16:22:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71639' 00:11:01.951 16:22:35 -- common/autotest_common.sh@955 -- # kill 71639 00:11:01.951 16:22:35 -- common/autotest_common.sh@960 -- # wait 71639 00:11:02.518 16:22:36 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:02.518 16:22:36 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:02.518 16:22:36 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:02.518 16:22:36 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:02.518 16:22:36 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:02.518 16:22:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.518 16:22:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:02.518 16:22:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.518 16:22:36 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:02.518 ************************************ 00:11:02.518 END TEST nvmf_lvol 00:11:02.518 ************************************ 00:11:02.518 00:11:02.518 real 0m16.090s 00:11:02.518 user 1m6.420s 00:11:02.518 sys 0m4.168s 00:11:02.518 16:22:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:02.518 16:22:36 -- common/autotest_common.sh@10 -- # set +x 00:11:02.518 16:22:36 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:02.518 16:22:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:02.518 16:22:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:02.518 16:22:36 -- common/autotest_common.sh@10 -- # set +x 00:11:02.518 ************************************ 00:11:02.518 START TEST nvmf_lvs_grow 00:11:02.518 ************************************ 00:11:02.518 16:22:36 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:02.518 * Looking for test storage... 
00:11:02.518 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:11:02.518 16:22:36 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:11:02.518 16:22:36 -- nvmf/common.sh@7 -- # uname -s
00:11:02.518 16:22:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:11:02.518 16:22:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:11:02.518 16:22:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:11:02.518 16:22:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:11:02.518 16:22:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:11:02.518 16:22:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:11:02.518 16:22:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:11:02.518 16:22:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:11:02.518 16:22:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:11:02.518 16:22:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:11:02.518 16:22:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d
00:11:02.518 16:22:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=35bbb10f-fc38-42ac-b909-033700c5e05d
00:11:02.518 16:22:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:11:02.518 16:22:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:11:02.518 16:22:36 -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:11:02.518 16:22:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:11:02.518 16:22:36 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:11:02.518 16:22:36 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:02.518 16:22:36 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:02.518 16:22:36 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:02.518 16:22:36 -- paths/export.sh@2 -- # PATH=[the same toolchain-heavy PATH logged for the previous test; condensed]
00:11:02.518 16:22:36 -- paths/export.sh@3 -- # PATH=[condensed]
00:11:02.518 16:22:36 -- paths/export.sh@4 -- # PATH=[condensed]
00:11:02.519 16:22:36 -- paths/export.sh@5 -- # export PATH
00:11:02.519 16:22:36 -- paths/export.sh@6 -- # echo [the exported PATH echoed back; condensed]
00:11:02.519 16:22:36 -- nvmf/common.sh@47 -- # : 0
00:11:02.519 16:22:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:11:02.519 16:22:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:11:02.519 16:22:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:11:02.519 16:22:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:11:02.519 16:22:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:11:02.519 16:22:36 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:11:02.519 16:22:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:11:02.519 16:22:36 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:11:02.519 16:22:36 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:11:02.519 16:22:36 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:11:02.519 16:22:36 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit
00:11:02.519 16:22:36 -- nvmf/common.sh@430 -- # '[' -z tcp ']'
00:11:02.519 16:22:36 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:11:02.519 16:22:36 -- nvmf/common.sh@437 -- # prepare_net_devs
00:11:02.519 16:22:36 -- nvmf/common.sh@399 -- # local -g is_hw=no
00:11:02.519 16:22:36 -- nvmf/common.sh@401 -- # remove_spdk_ns
00:11:02.519 16:22:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:02.519 16:22:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:11:02.519 16:22:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:02.519 16:22:36 -- nvmf/common.sh@403 -- # [[ virt != virt ]]
00:11:02.519 16:22:36 -- nvmf/common.sh@405 -- # [[ no == yes ]]
00:11:02.519 16:22:36 -- nvmf/common.sh@412 -- # [[ virt == phy ]]
00:11:02.519 16:22:36 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]]
00:11:02.519 16:22:36 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]]
00:11:02.519 16:22:36 -- nvmf/common.sh@421 -- # nvmf_veth_init
00:11:02.519 16:22:36 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1
00:11:02.519 16:22:36 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:11:02.519 16:22:36 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:11:02.519 16:22:36 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br
00:11:02.519 16:22:36 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:11:02.519 16:22:36 -- nvmf/common.sh@146 -- #
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:02.519 16:22:36 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:02.519 16:22:36 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:02.519 16:22:36 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:02.519 16:22:36 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:02.519 16:22:36 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:02.519 16:22:36 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:02.519 16:22:36 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:02.778 16:22:36 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:02.778 Cannot find device "nvmf_tgt_br" 00:11:02.778 16:22:36 -- nvmf/common.sh@155 -- # true 00:11:02.778 16:22:36 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:02.778 Cannot find device "nvmf_tgt_br2" 00:11:02.778 16:22:36 -- nvmf/common.sh@156 -- # true 00:11:02.778 16:22:36 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:02.778 16:22:36 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:02.778 Cannot find device "nvmf_tgt_br" 00:11:02.778 16:22:36 -- nvmf/common.sh@158 -- # true 00:11:02.778 16:22:36 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:02.778 Cannot find device "nvmf_tgt_br2" 00:11:02.778 16:22:36 -- nvmf/common.sh@159 -- # true 00:11:02.778 16:22:36 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:02.778 16:22:36 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:02.778 16:22:36 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:02.778 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:02.778 16:22:36 -- nvmf/common.sh@162 -- # true 00:11:02.778 16:22:36 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:02.778 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:02.778 16:22:36 -- nvmf/common.sh@163 -- # true 00:11:02.778 16:22:36 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:02.778 16:22:36 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:02.778 16:22:36 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:02.778 16:22:36 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:02.778 16:22:36 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:02.778 16:22:36 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:02.778 16:22:36 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:02.778 16:22:36 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:02.778 16:22:36 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:02.778 16:22:36 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:02.778 16:22:36 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:02.778 16:22:36 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:02.778 16:22:36 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:02.778 16:22:36 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:02.778 16:22:36 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:11:02.778 16:22:36 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:02.778 16:22:36 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:02.778 16:22:36 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:03.037 16:22:36 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:03.037 16:22:36 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:03.037 16:22:36 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:03.037 16:22:36 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:03.037 16:22:36 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:03.037 16:22:36 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:03.037 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:03.037 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:11:03.037 00:11:03.037 --- 10.0.0.2 ping statistics --- 00:11:03.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.037 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:11:03.037 16:22:36 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:03.037 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:03.037 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:11:03.037 00:11:03.037 --- 10.0.0.3 ping statistics --- 00:11:03.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.037 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:11:03.037 16:22:36 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:03.037 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:03.037 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:11:03.037 00:11:03.037 --- 10.0.0.1 ping statistics --- 00:11:03.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.037 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:11:03.037 16:22:36 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:03.037 16:22:36 -- nvmf/common.sh@422 -- # return 0 00:11:03.037 16:22:36 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:03.037 16:22:36 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:03.037 16:22:36 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:03.037 16:22:36 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:03.037 16:22:36 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:03.037 16:22:36 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:03.037 16:22:36 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:03.037 16:22:36 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:11:03.037 16:22:36 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:03.037 16:22:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:03.037 16:22:36 -- common/autotest_common.sh@10 -- # set +x 00:11:03.037 16:22:36 -- nvmf/common.sh@470 -- # nvmfpid=72162 00:11:03.037 16:22:36 -- nvmf/common.sh@471 -- # waitforlisten 72162 00:11:03.037 16:22:36 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:03.037 16:22:36 -- common/autotest_common.sh@817 -- # '[' -z 72162 ']' 00:11:03.037 16:22:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.037 16:22:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:03.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
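The three *_br peers are enslaved to the nvmf_br bridge, the two iptables rules open TCP port 4420 and intra-bridge forwarding, and the three single-packet pings prove the path in both directions before any NVMe traffic is attempted. The target itself runs inside the namespace, which is what the NVMF_TARGET_NS_CMD prefix accomplishes. A minimal sketch of the launch as it appears in the trace (-m 0x1 pins the target to core 0, -e 0xFFFF enables all tracepoint groups as the notices below confirm, and -i 0 is the SPDK shared-memory instance id):

modprobe nvme-tcp   # kernel NVMe/TCP module, loaded by the common setup
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &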
00:11:03.037 16:22:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.037 16:22:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:03.037 16:22:36 -- common/autotest_common.sh@10 -- # set +x 00:11:03.037 [2024-04-17 16:22:36.969603] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:11:03.037 [2024-04-17 16:22:36.969699] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:03.296 [2024-04-17 16:22:37.102033] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.296 [2024-04-17 16:22:37.237867] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:03.296 [2024-04-17 16:22:37.237933] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:03.296 [2024-04-17 16:22:37.237945] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:03.296 [2024-04-17 16:22:37.237953] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:03.296 [2024-04-17 16:22:37.237960] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:03.296 [2024-04-17 16:22:37.237989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.230 16:22:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:04.230 16:22:37 -- common/autotest_common.sh@850 -- # return 0 00:11:04.230 16:22:37 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:04.230 16:22:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:04.230 16:22:37 -- common/autotest_common.sh@10 -- # set +x 00:11:04.230 16:22:38 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:04.230 16:22:38 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:04.230 [2024-04-17 16:22:38.273140] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:04.489 16:22:38 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:11:04.489 16:22:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:04.489 16:22:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:04.489 16:22:38 -- common/autotest_common.sh@10 -- # set +x 00:11:04.489 ************************************ 00:11:04.489 START TEST lvs_grow_clean 00:11:04.489 ************************************ 00:11:04.489 16:22:38 -- common/autotest_common.sh@1111 -- # lvs_grow 00:11:04.489 16:22:38 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:04.489 16:22:38 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:04.489 16:22:38 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:04.489 16:22:38 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:04.489 16:22:38 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:04.489 16:22:38 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:04.489 16:22:38 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:04.489 16:22:38 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:04.489 16:22:38 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:04.747 16:22:38 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:04.747 16:22:38 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:05.006 16:22:38 -- target/nvmf_lvs_grow.sh@28 -- # lvs=385bc46d-b78d-4f19-a547-7eaec6264ee3 00:11:05.006 16:22:38 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:05.006 16:22:38 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 385bc46d-b78d-4f19-a547-7eaec6264ee3 00:11:05.265 16:22:39 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:05.265 16:22:39 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:05.265 16:22:39 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 385bc46d-b78d-4f19-a547-7eaec6264ee3 lvol 150 00:11:05.831 16:22:39 -- target/nvmf_lvs_grow.sh@33 -- # lvol=f0517664-5607-4447-b492-9609b364f0f4 00:11:05.831 16:22:39 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:05.831 16:22:39 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:05.831 [2024-04-17 16:22:39.806693] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:05.831 [2024-04-17 16:22:39.806798] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:05.831 true 00:11:05.831 16:22:39 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 385bc46d-b78d-4f19-a547-7eaec6264ee3 00:11:05.831 16:22:39 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:06.090 16:22:40 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:06.090 16:22:40 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:06.657 16:22:40 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f0517664-5607-4447-b492-9609b364f0f4 00:11:06.657 16:22:40 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:06.915 [2024-04-17 16:22:40.947339] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:07.176 16:22:40 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:07.434 16:22:41 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=72333 00:11:07.434 16:22:41 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:07.434 16:22:41 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:07.434 16:22:41 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 72333 
/var/tmp/bdevperf.sock 00:11:07.434 16:22:41 -- common/autotest_common.sh@817 -- # '[' -z 72333 ']' 00:11:07.434 16:22:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:07.434 16:22:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:07.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:07.434 16:22:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:07.434 16:22:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:07.434 16:22:41 -- common/autotest_common.sh@10 -- # set +x 00:11:07.434 [2024-04-17 16:22:41.277065] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:11:07.434 [2024-04-17 16:22:41.277164] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72333 ] 00:11:07.434 [2024-04-17 16:22:41.412885] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.693 [2024-04-17 16:22:41.550535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:08.261 16:22:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:08.261 16:22:42 -- common/autotest_common.sh@850 -- # return 0 00:11:08.261 16:22:42 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:08.519 Nvme0n1 00:11:08.519 16:22:42 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:08.777 [ 00:11:08.777 { 00:11:08.777 "aliases": [ 00:11:08.777 "f0517664-5607-4447-b492-9609b364f0f4" 00:11:08.777 ], 00:11:08.777 "assigned_rate_limits": { 00:11:08.777 "r_mbytes_per_sec": 0, 00:11:08.777 "rw_ios_per_sec": 0, 00:11:08.777 "rw_mbytes_per_sec": 0, 00:11:08.777 "w_mbytes_per_sec": 0 00:11:08.777 }, 00:11:08.777 "block_size": 4096, 00:11:08.777 "claimed": false, 00:11:08.777 "driver_specific": { 00:11:08.777 "mp_policy": "active_passive", 00:11:08.777 "nvme": [ 00:11:08.777 { 00:11:08.777 "ctrlr_data": { 00:11:08.777 "ana_reporting": false, 00:11:08.777 "cntlid": 1, 00:11:08.777 "firmware_revision": "24.05", 00:11:08.777 "model_number": "SPDK bdev Controller", 00:11:08.777 "multi_ctrlr": true, 00:11:08.777 "oacs": { 00:11:08.777 "firmware": 0, 00:11:08.777 "format": 0, 00:11:08.777 "ns_manage": 0, 00:11:08.777 "security": 0 00:11:08.777 }, 00:11:08.777 "serial_number": "SPDK0", 00:11:08.777 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:08.777 "vendor_id": "0x8086" 00:11:08.777 }, 00:11:08.777 "ns_data": { 00:11:08.777 "can_share": true, 00:11:08.777 "id": 1 00:11:08.777 }, 00:11:08.777 "trid": { 00:11:08.777 "adrfam": "IPv4", 00:11:08.777 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:08.777 "traddr": "10.0.0.2", 00:11:08.777 "trsvcid": "4420", 00:11:08.777 "trtype": "TCP" 00:11:08.777 }, 00:11:08.777 "vs": { 00:11:08.777 "nvme_version": "1.3" 00:11:08.777 } 00:11:08.777 } 00:11:08.777 ] 00:11:08.777 }, 00:11:08.777 "memory_domains": [ 00:11:08.777 { 00:11:08.777 "dma_device_id": "system", 00:11:08.777 "dma_device_type": 1 00:11:08.777 } 00:11:08.777 ], 00:11:08.777 "name": "Nvme0n1", 00:11:08.777 "num_blocks": 38912, 00:11:08.777 "product_name": "NVMe 
disk", 00:11:08.777 "supported_io_types": { 00:11:08.777 "abort": true, 00:11:08.777 "compare": true, 00:11:08.777 "compare_and_write": true, 00:11:08.777 "flush": true, 00:11:08.777 "nvme_admin": true, 00:11:08.777 "nvme_io": true, 00:11:08.777 "read": true, 00:11:08.777 "reset": true, 00:11:08.777 "unmap": true, 00:11:08.777 "write": true, 00:11:08.777 "write_zeroes": true 00:11:08.777 }, 00:11:08.777 "uuid": "f0517664-5607-4447-b492-9609b364f0f4", 00:11:08.777 "zoned": false 00:11:08.777 } 00:11:08.777 ] 00:11:09.035 16:22:42 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=72381 00:11:09.035 16:22:42 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:09.035 16:22:42 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:09.035 Running I/O for 10 seconds... 00:11:09.971 Latency(us) 00:11:09.971 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:09.971 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:09.971 Nvme0n1 : 1.00 7428.00 29.02 0.00 0.00 0.00 0.00 0.00 00:11:09.971 =================================================================================================================== 00:11:09.971 Total : 7428.00 29.02 0.00 0.00 0.00 0.00 0.00 00:11:09.971 00:11:10.907 16:22:44 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 385bc46d-b78d-4f19-a547-7eaec6264ee3 00:11:10.907 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:10.907 Nvme0n1 : 2.00 7588.50 29.64 0.00 0.00 0.00 0.00 0.00 00:11:10.907 =================================================================================================================== 00:11:10.907 Total : 7588.50 29.64 0.00 0.00 0.00 0.00 0.00 00:11:10.907 00:11:11.165 true 00:11:11.165 16:22:45 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 385bc46d-b78d-4f19-a547-7eaec6264ee3 00:11:11.165 16:22:45 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:11.423 16:22:45 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:11.423 16:22:45 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:11.423 16:22:45 -- target/nvmf_lvs_grow.sh@65 -- # wait 72381 00:11:11.990 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:11.990 Nvme0n1 : 3.00 7681.00 30.00 0.00 0.00 0.00 0.00 0.00 00:11:11.990 =================================================================================================================== 00:11:11.990 Total : 7681.00 30.00 0.00 0.00 0.00 0.00 0.00 00:11:11.990 00:11:12.925 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:12.925 Nvme0n1 : 4.00 7723.50 30.17 0.00 0.00 0.00 0.00 0.00 00:11:12.925 =================================================================================================================== 00:11:12.925 Total : 7723.50 30.17 0.00 0.00 0.00 0.00 0.00 00:11:12.925 00:11:14.319 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:14.319 Nvme0n1 : 5.00 7694.60 30.06 0.00 0.00 0.00 0.00 0.00 00:11:14.319 =================================================================================================================== 00:11:14.319 Total : 7694.60 30.06 0.00 0.00 0.00 0.00 0.00 00:11:14.319 00:11:14.884 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:14.884 Nvme0n1 : 6.00 7664.17 29.94 
0.00 0.00 0.00 0.00 0.00 00:11:14.884 =================================================================================================================== 00:11:14.884 Total : 7664.17 29.94 0.00 0.00 0.00 0.00 0.00 00:11:14.884 00:11:16.256 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:16.256 Nvme0n1 : 7.00 7616.57 29.75 0.00 0.00 0.00 0.00 0.00 00:11:16.256 =================================================================================================================== 00:11:16.256 Total : 7616.57 29.75 0.00 0.00 0.00 0.00 0.00 00:11:16.256 00:11:17.190 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:17.190 Nvme0n1 : 8.00 7559.50 29.53 0.00 0.00 0.00 0.00 0.00 00:11:17.190 =================================================================================================================== 00:11:17.190 Total : 7559.50 29.53 0.00 0.00 0.00 0.00 0.00 00:11:17.190 00:11:18.142 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:18.142 Nvme0n1 : 9.00 7506.33 29.32 0.00 0.00 0.00 0.00 0.00 00:11:18.142 =================================================================================================================== 00:11:18.142 Total : 7506.33 29.32 0.00 0.00 0.00 0.00 0.00 00:11:18.142 00:11:19.078 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:19.078 Nvme0n1 : 10.00 7497.60 29.29 0.00 0.00 0.00 0.00 0.00 00:11:19.078 =================================================================================================================== 00:11:19.078 Total : 7497.60 29.29 0.00 0.00 0.00 0.00 0.00 00:11:19.078 00:11:19.078 00:11:19.078 Latency(us) 00:11:19.078 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:19.078 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:19.078 Nvme0n1 : 10.02 7500.80 29.30 0.00 0.00 17052.11 7685.59 46709.29 00:11:19.078 =================================================================================================================== 00:11:19.078 Total : 7500.80 29.30 0.00 0.00 17052.11 7685.59 46709.29 00:11:19.078 0 00:11:19.078 16:22:52 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 72333 00:11:19.078 16:22:52 -- common/autotest_common.sh@936 -- # '[' -z 72333 ']' 00:11:19.078 16:22:52 -- common/autotest_common.sh@940 -- # kill -0 72333 00:11:19.078 16:22:52 -- common/autotest_common.sh@941 -- # uname 00:11:19.078 16:22:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:19.078 16:22:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72333 00:11:19.078 16:22:52 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:19.078 16:22:52 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:19.078 16:22:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72333' 00:11:19.078 killing process with pid 72333 00:11:19.078 16:22:52 -- common/autotest_common.sh@955 -- # kill 72333 00:11:19.078 Received shutdown signal, test time was about 10.000000 seconds 00:11:19.078 00:11:19.078 Latency(us) 00:11:19.078 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:19.078 =================================================================================================================== 00:11:19.078 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:19.078 16:22:52 -- common/autotest_common.sh@960 -- # wait 72333 00:11:19.336 16:22:53 -- target/nvmf_lvs_grow.sh@68 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:19.594 16:22:53 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 385bc46d-b78d-4f19-a547-7eaec6264ee3 00:11:19.594 16:22:53 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:11:19.853 16:22:53 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:11:19.853 16:22:53 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:11:19.853 16:22:53 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:20.111 [2024-04-17 16:22:54.046501] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:20.111 16:22:54 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 385bc46d-b78d-4f19-a547-7eaec6264ee3 00:11:20.111 16:22:54 -- common/autotest_common.sh@638 -- # local es=0 00:11:20.111 16:22:54 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 385bc46d-b78d-4f19-a547-7eaec6264ee3 00:11:20.111 16:22:54 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:20.111 16:22:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:20.111 16:22:54 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:20.111 16:22:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:20.111 16:22:54 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:20.111 16:22:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:20.111 16:22:54 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:20.111 16:22:54 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:20.111 16:22:54 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 385bc46d-b78d-4f19-a547-7eaec6264ee3 00:11:20.369 2024/04/17 16:22:54 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:385bc46d-b78d-4f19-a547-7eaec6264ee3], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:11:20.369 request: 00:11:20.369 { 00:11:20.369 "method": "bdev_lvol_get_lvstores", 00:11:20.369 "params": { 00:11:20.369 "uuid": "385bc46d-b78d-4f19-a547-7eaec6264ee3" 00:11:20.369 } 00:11:20.369 } 00:11:20.369 Got JSON-RPC error response 00:11:20.369 GoRPCClient: error on JSON-RPC call 00:11:20.369 16:22:54 -- common/autotest_common.sh@641 -- # es=1 00:11:20.369 16:22:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:20.369 16:22:54 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:20.369 16:22:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:20.369 16:22:54 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:20.627 aio_bdev 00:11:20.627 16:22:54 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev f0517664-5607-4447-b492-9609b364f0f4 00:11:20.627 16:22:54 -- common/autotest_common.sh@885 -- # local bdev_name=f0517664-5607-4447-b492-9609b364f0f4 00:11:20.627 16:22:54 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:11:20.627 16:22:54 -- common/autotest_common.sh@887 -- # 
local i 00:11:20.627 16:22:54 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:11:20.627 16:22:54 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:11:20.627 16:22:54 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:20.886 16:22:54 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f0517664-5607-4447-b492-9609b364f0f4 -t 2000 00:11:21.145 [ 00:11:21.145 { 00:11:21.145 "aliases": [ 00:11:21.145 "lvs/lvol" 00:11:21.145 ], 00:11:21.145 "assigned_rate_limits": { 00:11:21.145 "r_mbytes_per_sec": 0, 00:11:21.145 "rw_ios_per_sec": 0, 00:11:21.145 "rw_mbytes_per_sec": 0, 00:11:21.145 "w_mbytes_per_sec": 0 00:11:21.145 }, 00:11:21.145 "block_size": 4096, 00:11:21.145 "claimed": false, 00:11:21.145 "driver_specific": { 00:11:21.145 "lvol": { 00:11:21.145 "base_bdev": "aio_bdev", 00:11:21.145 "clone": false, 00:11:21.145 "esnap_clone": false, 00:11:21.145 "lvol_store_uuid": "385bc46d-b78d-4f19-a547-7eaec6264ee3", 00:11:21.145 "snapshot": false, 00:11:21.145 "thin_provision": false 00:11:21.145 } 00:11:21.145 }, 00:11:21.145 "name": "f0517664-5607-4447-b492-9609b364f0f4", 00:11:21.145 "num_blocks": 38912, 00:11:21.145 "product_name": "Logical Volume", 00:11:21.145 "supported_io_types": { 00:11:21.145 "abort": false, 00:11:21.145 "compare": false, 00:11:21.145 "compare_and_write": false, 00:11:21.145 "flush": false, 00:11:21.145 "nvme_admin": false, 00:11:21.145 "nvme_io": false, 00:11:21.145 "read": true, 00:11:21.145 "reset": true, 00:11:21.145 "unmap": true, 00:11:21.145 "write": true, 00:11:21.145 "write_zeroes": true 00:11:21.145 }, 00:11:21.145 "uuid": "f0517664-5607-4447-b492-9609b364f0f4", 00:11:21.145 "zoned": false 00:11:21.145 } 00:11:21.145 ] 00:11:21.145 16:22:55 -- common/autotest_common.sh@893 -- # return 0 00:11:21.145 16:22:55 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 385bc46d-b78d-4f19-a547-7eaec6264ee3 00:11:21.145 16:22:55 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:11:21.712 16:22:55 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:11:21.712 16:22:55 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 385bc46d-b78d-4f19-a547-7eaec6264ee3 00:11:21.712 16:22:55 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:11:21.971 16:22:55 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:11:21.971 16:22:55 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete f0517664-5607-4447-b492-9609b364f0f4 00:11:22.229 16:22:56 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 385bc46d-b78d-4f19-a547-7eaec6264ee3 00:11:22.489 16:22:56 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:22.747 16:22:56 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:23.004 ************************************ 00:11:23.004 END TEST lvs_grow_clean 00:11:23.004 ************************************ 00:11:23.004 00:11:23.004 real 0m18.621s 00:11:23.004 user 0m17.798s 00:11:23.004 sys 0m2.309s 00:11:23.004 16:22:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:23.004 16:22:56 -- common/autotest_common.sh@10 -- # set +x 00:11:23.004 16:22:57 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty 
lvs_grow dirty 00:11:23.004 16:22:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:23.004 16:22:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:23.004 16:22:57 -- common/autotest_common.sh@10 -- # set +x 00:11:23.262 ************************************ 00:11:23.262 START TEST lvs_grow_dirty 00:11:23.262 ************************************ 00:11:23.262 16:22:57 -- common/autotest_common.sh@1111 -- # lvs_grow dirty 00:11:23.262 16:22:57 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:23.262 16:22:57 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:23.262 16:22:57 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:23.262 16:22:57 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:23.262 16:22:57 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:23.262 16:22:57 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:23.262 16:22:57 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:23.262 16:22:57 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:23.262 16:22:57 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:23.520 16:22:57 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:23.520 16:22:57 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:23.778 16:22:57 -- target/nvmf_lvs_grow.sh@28 -- # lvs=8c26f8e6-16a1-4b81-a8a6-b906a64614ae 00:11:23.778 16:22:57 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c26f8e6-16a1-4b81-a8a6-b906a64614ae 00:11:23.778 16:22:57 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:24.036 16:22:57 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:24.036 16:22:57 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:24.036 16:22:57 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8c26f8e6-16a1-4b81-a8a6-b906a64614ae lvol 150 00:11:24.293 16:22:58 -- target/nvmf_lvs_grow.sh@33 -- # lvol=b53d6835-f53c-4fbe-8023-c96337c74d6d 00:11:24.293 16:22:58 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:24.294 16:22:58 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:24.552 [2024-04-17 16:22:58.517757] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:24.552 [2024-04-17 16:22:58.517864] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:24.552 true 00:11:24.552 16:22:58 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c26f8e6-16a1-4b81-a8a6-b906a64614ae 00:11:24.552 16:22:58 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:24.811 16:22:58 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:24.811 16:22:58 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 
-s SPDK0 00:11:25.070 16:22:59 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b53d6835-f53c-4fbe-8023-c96337c74d6d 00:11:25.331 16:22:59 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:25.590 16:22:59 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:25.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:25.855 16:22:59 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=72785 00:11:25.855 16:22:59 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:25.855 16:22:59 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:25.855 16:22:59 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 72785 /var/tmp/bdevperf.sock 00:11:25.855 16:22:59 -- common/autotest_common.sh@817 -- # '[' -z 72785 ']' 00:11:25.855 16:22:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:25.855 16:22:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:25.855 16:22:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:25.855 16:22:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:25.855 16:22:59 -- common/autotest_common.sh@10 -- # set +x 00:11:25.855 [2024-04-17 16:22:59.875890] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 
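The process being waited on here is bdevperf, the I/O generator for the dirty-run measurements, launched with the same flags as in the clean run earlier. Reading the command line back out of the trace, with flag glosses that are the usual bdevperf meanings rather than anything stated in the log:

# -m 0x2: core 1, keeping clear of the target on core 0
# -o 4096 -q 128 -w randwrite: 4 KiB random writes at queue depth 128
# -t 10: ten-second run; -S 1: print a status line every second
# -z: start idle and wait for the perform_tests RPC seen a few records below
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z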
00:11:25.855 [2024-04-17 16:22:59.875988] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72785 ] 00:11:26.141 [2024-04-17 16:23:00.011980] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:26.141 [2024-04-17 16:23:00.140831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:27.072 16:23:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:27.072 16:23:00 -- common/autotest_common.sh@850 -- # return 0 00:11:27.072 16:23:00 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:27.072 Nvme0n1 00:11:27.330 16:23:01 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:27.330 [ 00:11:27.330 { 00:11:27.330 "aliases": [ 00:11:27.330 "b53d6835-f53c-4fbe-8023-c96337c74d6d" 00:11:27.330 ], 00:11:27.330 "assigned_rate_limits": { 00:11:27.330 "r_mbytes_per_sec": 0, 00:11:27.330 "rw_ios_per_sec": 0, 00:11:27.330 "rw_mbytes_per_sec": 0, 00:11:27.330 "w_mbytes_per_sec": 0 00:11:27.330 }, 00:11:27.330 "block_size": 4096, 00:11:27.330 "claimed": false, 00:11:27.330 "driver_specific": { 00:11:27.330 "mp_policy": "active_passive", 00:11:27.330 "nvme": [ 00:11:27.330 { 00:11:27.330 "ctrlr_data": { 00:11:27.330 "ana_reporting": false, 00:11:27.330 "cntlid": 1, 00:11:27.330 "firmware_revision": "24.05", 00:11:27.330 "model_number": "SPDK bdev Controller", 00:11:27.330 "multi_ctrlr": true, 00:11:27.330 "oacs": { 00:11:27.330 "firmware": 0, 00:11:27.330 "format": 0, 00:11:27.330 "ns_manage": 0, 00:11:27.330 "security": 0 00:11:27.330 }, 00:11:27.330 "serial_number": "SPDK0", 00:11:27.330 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:27.330 "vendor_id": "0x8086" 00:11:27.330 }, 00:11:27.330 "ns_data": { 00:11:27.330 "can_share": true, 00:11:27.330 "id": 1 00:11:27.330 }, 00:11:27.330 "trid": { 00:11:27.330 "adrfam": "IPv4", 00:11:27.330 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:27.330 "traddr": "10.0.0.2", 00:11:27.330 "trsvcid": "4420", 00:11:27.330 "trtype": "TCP" 00:11:27.330 }, 00:11:27.330 "vs": { 00:11:27.330 "nvme_version": "1.3" 00:11:27.330 } 00:11:27.330 } 00:11:27.330 ] 00:11:27.330 }, 00:11:27.330 "memory_domains": [ 00:11:27.330 { 00:11:27.330 "dma_device_id": "system", 00:11:27.330 "dma_device_type": 1 00:11:27.330 } 00:11:27.330 ], 00:11:27.330 "name": "Nvme0n1", 00:11:27.330 "num_blocks": 38912, 00:11:27.330 "product_name": "NVMe disk", 00:11:27.330 "supported_io_types": { 00:11:27.330 "abort": true, 00:11:27.330 "compare": true, 00:11:27.330 "compare_and_write": true, 00:11:27.330 "flush": true, 00:11:27.330 "nvme_admin": true, 00:11:27.330 "nvme_io": true, 00:11:27.330 "read": true, 00:11:27.330 "reset": true, 00:11:27.330 "unmap": true, 00:11:27.330 "write": true, 00:11:27.330 "write_zeroes": true 00:11:27.330 }, 00:11:27.330 "uuid": "b53d6835-f53c-4fbe-8023-c96337c74d6d", 00:11:27.330 "zoned": false 00:11:27.330 } 00:11:27.330 ] 00:11:27.330 16:23:01 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=72827 00:11:27.330 16:23:01 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:27.330 16:23:01 -- target/nvmf_lvs_grow.sh@57 
-- # sleep 2 00:11:27.587 Running I/O for 10 seconds... 00:11:28.522 Latency(us) 00:11:28.522 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:28.522 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:28.522 Nvme0n1 : 1.00 7939.00 31.01 0.00 0.00 0.00 0.00 0.00 00:11:28.522 =================================================================================================================== 00:11:28.522 Total : 7939.00 31.01 0.00 0.00 0.00 0.00 0.00 00:11:28.522 00:11:29.455 16:23:03 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8c26f8e6-16a1-4b81-a8a6-b906a64614ae 00:11:29.455 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:29.455 Nvme0n1 : 2.00 7803.50 30.48 0.00 0.00 0.00 0.00 0.00 00:11:29.455 =================================================================================================================== 00:11:29.455 Total : 7803.50 30.48 0.00 0.00 0.00 0.00 0.00 00:11:29.455 00:11:29.713 true 00:11:29.713 16:23:03 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c26f8e6-16a1-4b81-a8a6-b906a64614ae 00:11:29.713 16:23:03 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:30.279 16:23:04 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:30.279 16:23:04 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:30.279 16:23:04 -- target/nvmf_lvs_grow.sh@65 -- # wait 72827 00:11:30.536 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:30.536 Nvme0n1 : 3.00 7859.00 30.70 0.00 0.00 0.00 0.00 0.00 00:11:30.536 =================================================================================================================== 00:11:30.536 Total : 7859.00 30.70 0.00 0.00 0.00 0.00 0.00 00:11:30.536 00:11:31.471 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:31.471 Nvme0n1 : 4.00 7857.75 30.69 0.00 0.00 0.00 0.00 0.00 00:11:31.471 =================================================================================================================== 00:11:31.471 Total : 7857.75 30.69 0.00 0.00 0.00 0.00 0.00 00:11:31.471 00:11:32.843 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:32.843 Nvme0n1 : 5.00 7838.20 30.62 0.00 0.00 0.00 0.00 0.00 00:11:32.843 =================================================================================================================== 00:11:32.843 Total : 7838.20 30.62 0.00 0.00 0.00 0.00 0.00 00:11:32.843 00:11:33.777 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:33.777 Nvme0n1 : 6.00 7838.00 30.62 0.00 0.00 0.00 0.00 0.00 00:11:33.777 =================================================================================================================== 00:11:33.777 Total : 7838.00 30.62 0.00 0.00 0.00 0.00 0.00 00:11:33.777 00:11:34.713 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:34.713 Nvme0n1 : 7.00 7647.14 29.87 0.00 0.00 0.00 0.00 0.00 00:11:34.713 =================================================================================================================== 00:11:34.713 Total : 7647.14 29.87 0.00 0.00 0.00 0.00 0.00 00:11:34.713 00:11:35.648 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:35.648 Nvme0n1 : 8.00 7592.75 29.66 0.00 0.00 0.00 0.00 0.00 00:11:35.648 
=================================================================================================================== 00:11:35.648 Total : 7592.75 29.66 0.00 0.00 0.00 0.00 0.00 00:11:35.648 00:11:36.583 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:36.583 Nvme0n1 : 9.00 7562.89 29.54 0.00 0.00 0.00 0.00 0.00 00:11:36.583 =================================================================================================================== 00:11:36.583 Total : 7562.89 29.54 0.00 0.00 0.00 0.00 0.00 00:11:36.583 00:11:37.518 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:37.518 Nvme0n1 : 10.00 7553.00 29.50 0.00 0.00 0.00 0.00 0.00 00:11:37.518 =================================================================================================================== 00:11:37.518 Total : 7553.00 29.50 0.00 0.00 0.00 0.00 0.00 00:11:37.518 00:11:37.518 00:11:37.518 Latency(us) 00:11:37.518 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:37.518 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:37.518 Nvme0n1 : 10.01 7559.80 29.53 0.00 0.00 16925.79 7804.74 149660.39 00:11:37.518 =================================================================================================================== 00:11:37.518 Total : 7559.80 29.53 0.00 0.00 16925.79 7804.74 149660.39 00:11:37.518 0 00:11:37.518 16:23:11 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 72785 00:11:37.518 16:23:11 -- common/autotest_common.sh@936 -- # '[' -z 72785 ']' 00:11:37.518 16:23:11 -- common/autotest_common.sh@940 -- # kill -0 72785 00:11:37.518 16:23:11 -- common/autotest_common.sh@941 -- # uname 00:11:37.518 16:23:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:37.518 16:23:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72785 00:11:37.518 killing process with pid 72785 00:11:37.518 Received shutdown signal, test time was about 10.000000 seconds 00:11:37.518 00:11:37.518 Latency(us) 00:11:37.518 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:37.518 =================================================================================================================== 00:11:37.518 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:37.518 16:23:11 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:37.518 16:23:11 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:37.518 16:23:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72785' 00:11:37.518 16:23:11 -- common/autotest_common.sh@955 -- # kill 72785 00:11:37.518 16:23:11 -- common/autotest_common.sh@960 -- # wait 72785 00:11:38.085 16:23:11 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:38.343 16:23:12 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:11:38.344 16:23:12 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c26f8e6-16a1-4b81-a8a6-b906a64614ae 00:11:38.602 16:23:12 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:11:38.602 16:23:12 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:11:38.602 16:23:12 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 72162 00:11:38.602 16:23:12 -- target/nvmf_lvs_grow.sh@74 -- # wait 72162 00:11:38.602 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 72162 Killed "${NVMF_APP[@]}" "$@" 00:11:38.602 16:23:12 
-- target/nvmf_lvs_grow.sh@74 -- # true 00:11:38.602 16:23:12 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:11:38.602 16:23:12 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:38.602 16:23:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:38.602 16:23:12 -- common/autotest_common.sh@10 -- # set +x 00:11:38.602 16:23:12 -- nvmf/common.sh@470 -- # nvmfpid=72983 00:11:38.602 16:23:12 -- nvmf/common.sh@471 -- # waitforlisten 72983 00:11:38.602 16:23:12 -- common/autotest_common.sh@817 -- # '[' -z 72983 ']' 00:11:38.602 16:23:12 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:38.602 16:23:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.602 16:23:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:38.602 16:23:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:38.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:38.602 16:23:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:38.602 16:23:12 -- common/autotest_common.sh@10 -- # set +x 00:11:38.602 [2024-04-17 16:23:12.558382] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:11:38.602 [2024-04-17 16:23:12.558486] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:38.860 [2024-04-17 16:23:12.696042] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.860 [2024-04-17 16:23:12.827144] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:38.860 [2024-04-17 16:23:12.827205] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:38.860 [2024-04-17 16:23:12.827217] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:38.860 [2024-04-17 16:23:12.827227] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:38.860 [2024-04-17 16:23:12.827235] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
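The kill -9 of the first target (pid 72162) a few records up is the whole point of the dirty variant: the lvstore never sees a clean shutdown, so when the replacement target (pid 72983) re-creates the aio bdev below, the blobstore detects the unclean unload and replays its metadata; that is what the "Performing recovery on blobstore" and "Recover: blob" notices record. The cluster checks that follow must still decode exactly as in the clean run. Spelled out, assuming (consistently with the 49 and 99 readings) that one cluster is reserved for lvstore metadata:

200 MiB file / 4 MiB clusters = 50, minus 1 for metadata  ->  total_data_clusters == 49
400 MiB file / 4 MiB clusters = 100, minus 1 for metadata ->  total_data_clusters == 99
150 MiB lvol = ceil(150 / 4) = 38 clusters allocated      ->  free_clusters == 99 - 38 == 61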
00:11:38.860 [2024-04-17 16:23:12.827263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.797 16:23:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:39.797 16:23:13 -- common/autotest_common.sh@850 -- # return 0 00:11:39.797 16:23:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:39.797 16:23:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:39.797 16:23:13 -- common/autotest_common.sh@10 -- # set +x 00:11:39.797 16:23:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:39.797 16:23:13 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:39.797 [2024-04-17 16:23:13.788454] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:11:39.797 [2024-04-17 16:23:13.788851] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:11:39.797 [2024-04-17 16:23:13.789059] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:11:39.797 16:23:13 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:11:39.797 16:23:13 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev b53d6835-f53c-4fbe-8023-c96337c74d6d 00:11:39.797 16:23:13 -- common/autotest_common.sh@885 -- # local bdev_name=b53d6835-f53c-4fbe-8023-c96337c74d6d 00:11:39.797 16:23:13 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:11:39.797 16:23:13 -- common/autotest_common.sh@887 -- # local i 00:11:39.797 16:23:13 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:11:39.797 16:23:13 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:11:39.797 16:23:13 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:40.363 16:23:14 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b53d6835-f53c-4fbe-8023-c96337c74d6d -t 2000 00:11:40.621 [ 00:11:40.621 { 00:11:40.621 "aliases": [ 00:11:40.621 "lvs/lvol" 00:11:40.621 ], 00:11:40.621 "assigned_rate_limits": { 00:11:40.621 "r_mbytes_per_sec": 0, 00:11:40.621 "rw_ios_per_sec": 0, 00:11:40.621 "rw_mbytes_per_sec": 0, 00:11:40.621 "w_mbytes_per_sec": 0 00:11:40.621 }, 00:11:40.621 "block_size": 4096, 00:11:40.621 "claimed": false, 00:11:40.622 "driver_specific": { 00:11:40.622 "lvol": { 00:11:40.622 "base_bdev": "aio_bdev", 00:11:40.622 "clone": false, 00:11:40.622 "esnap_clone": false, 00:11:40.622 "lvol_store_uuid": "8c26f8e6-16a1-4b81-a8a6-b906a64614ae", 00:11:40.622 "snapshot": false, 00:11:40.622 "thin_provision": false 00:11:40.622 } 00:11:40.622 }, 00:11:40.622 "name": "b53d6835-f53c-4fbe-8023-c96337c74d6d", 00:11:40.622 "num_blocks": 38912, 00:11:40.622 "product_name": "Logical Volume", 00:11:40.622 "supported_io_types": { 00:11:40.622 "abort": false, 00:11:40.622 "compare": false, 00:11:40.622 "compare_and_write": false, 00:11:40.622 "flush": false, 00:11:40.622 "nvme_admin": false, 00:11:40.622 "nvme_io": false, 00:11:40.622 "read": true, 00:11:40.622 "reset": true, 00:11:40.622 "unmap": true, 00:11:40.622 "write": true, 00:11:40.622 "write_zeroes": true 00:11:40.622 }, 00:11:40.622 "uuid": "b53d6835-f53c-4fbe-8023-c96337c74d6d", 00:11:40.622 "zoned": false 00:11:40.622 } 00:11:40.622 ] 00:11:40.622 16:23:14 -- common/autotest_common.sh@893 -- # return 0 00:11:40.622 16:23:14 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
8c26f8e6-16a1-4b81-a8a6-b906a64614ae 00:11:40.622 16:23:14 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:11:40.880 16:23:14 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:11:40.880 16:23:14 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c26f8e6-16a1-4b81-a8a6-b906a64614ae 00:11:40.880 16:23:14 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:11:41.139 16:23:14 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:11:41.139 16:23:14 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:41.397 [2024-04-17 16:23:15.237870] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:41.397 16:23:15 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c26f8e6-16a1-4b81-a8a6-b906a64614ae 00:11:41.397 16:23:15 -- common/autotest_common.sh@638 -- # local es=0 00:11:41.397 16:23:15 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c26f8e6-16a1-4b81-a8a6-b906a64614ae 00:11:41.397 16:23:15 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:41.397 16:23:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:41.397 16:23:15 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:41.397 16:23:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:41.397 16:23:15 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:41.397 16:23:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:41.397 16:23:15 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:41.397 16:23:15 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:41.397 16:23:15 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c26f8e6-16a1-4b81-a8a6-b906a64614ae 00:11:41.655 2024/04/17 16:23:15 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:8c26f8e6-16a1-4b81-a8a6-b906a64614ae], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:11:41.655 request: 00:11:41.655 { 00:11:41.655 "method": "bdev_lvol_get_lvstores", 00:11:41.655 "params": { 00:11:41.655 "uuid": "8c26f8e6-16a1-4b81-a8a6-b906a64614ae" 00:11:41.655 } 00:11:41.655 } 00:11:41.655 Got JSON-RPC error response 00:11:41.655 GoRPCClient: error on JSON-RPC call 00:11:41.655 16:23:15 -- common/autotest_common.sh@641 -- # es=1 00:11:41.655 16:23:15 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:41.655 16:23:15 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:41.655 16:23:15 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:41.655 16:23:15 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:41.914 aio_bdev 00:11:41.914 16:23:15 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev b53d6835-f53c-4fbe-8023-c96337c74d6d 00:11:41.914 16:23:15 -- common/autotest_common.sh@885 -- # local bdev_name=b53d6835-f53c-4fbe-8023-c96337c74d6d 00:11:41.914 16:23:15 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:11:41.914 
16:23:15 -- common/autotest_common.sh@887 -- # local i 00:11:41.914 16:23:15 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:11:41.914 16:23:15 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:11:41.914 16:23:15 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:42.172 16:23:16 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b53d6835-f53c-4fbe-8023-c96337c74d6d -t 2000 00:11:42.431 [ 00:11:42.431 { 00:11:42.431 "aliases": [ 00:11:42.431 "lvs/lvol" 00:11:42.431 ], 00:11:42.431 "assigned_rate_limits": { 00:11:42.431 "r_mbytes_per_sec": 0, 00:11:42.431 "rw_ios_per_sec": 0, 00:11:42.431 "rw_mbytes_per_sec": 0, 00:11:42.431 "w_mbytes_per_sec": 0 00:11:42.431 }, 00:11:42.431 "block_size": 4096, 00:11:42.431 "claimed": false, 00:11:42.431 "driver_specific": { 00:11:42.431 "lvol": { 00:11:42.431 "base_bdev": "aio_bdev", 00:11:42.431 "clone": false, 00:11:42.431 "esnap_clone": false, 00:11:42.431 "lvol_store_uuid": "8c26f8e6-16a1-4b81-a8a6-b906a64614ae", 00:11:42.431 "snapshot": false, 00:11:42.431 "thin_provision": false 00:11:42.431 } 00:11:42.431 }, 00:11:42.431 "name": "b53d6835-f53c-4fbe-8023-c96337c74d6d", 00:11:42.431 "num_blocks": 38912, 00:11:42.431 "product_name": "Logical Volume", 00:11:42.431 "supported_io_types": { 00:11:42.431 "abort": false, 00:11:42.431 "compare": false, 00:11:42.431 "compare_and_write": false, 00:11:42.431 "flush": false, 00:11:42.431 "nvme_admin": false, 00:11:42.431 "nvme_io": false, 00:11:42.431 "read": true, 00:11:42.431 "reset": true, 00:11:42.431 "unmap": true, 00:11:42.431 "write": true, 00:11:42.431 "write_zeroes": true 00:11:42.431 }, 00:11:42.431 "uuid": "b53d6835-f53c-4fbe-8023-c96337c74d6d", 00:11:42.431 "zoned": false 00:11:42.431 } 00:11:42.431 ] 00:11:42.431 16:23:16 -- common/autotest_common.sh@893 -- # return 0 00:11:42.431 16:23:16 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:11:42.431 16:23:16 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c26f8e6-16a1-4b81-a8a6-b906a64614ae 00:11:42.690 16:23:16 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:11:42.690 16:23:16 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c26f8e6-16a1-4b81-a8a6-b906a64614ae 00:11:42.690 16:23:16 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:11:42.947 16:23:16 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:11:42.947 16:23:16 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete b53d6835-f53c-4fbe-8023-c96337c74d6d 00:11:43.205 16:23:17 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8c26f8e6-16a1-4b81-a8a6-b906a64614ae 00:11:43.463 16:23:17 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:43.721 16:23:17 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:44.287 ************************************ 00:11:44.287 END TEST lvs_grow_dirty 00:11:44.287 ************************************ 00:11:44.287 00:11:44.287 real 0m20.974s 00:11:44.287 user 0m43.719s 00:11:44.287 sys 0m7.874s 00:11:44.287 16:23:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:44.287 16:23:18 -- common/autotest_common.sh@10 -- # set +x 00:11:44.287 16:23:18 -- 
target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:11:44.287 16:23:18 -- common/autotest_common.sh@794 -- # type=--id 00:11:44.287 16:23:18 -- common/autotest_common.sh@795 -- # id=0 00:11:44.287 16:23:18 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:11:44.287 16:23:18 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:44.287 16:23:18 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:11:44.287 16:23:18 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:11:44.287 16:23:18 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:11:44.287 16:23:18 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:44.287 nvmf_trace.0 00:11:44.287 16:23:18 -- common/autotest_common.sh@809 -- # return 0 00:11:44.287 16:23:18 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:11:44.287 16:23:18 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:44.287 16:23:18 -- nvmf/common.sh@117 -- # sync 00:11:44.287 16:23:18 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:44.287 16:23:18 -- nvmf/common.sh@120 -- # set +e 00:11:44.287 16:23:18 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:44.287 16:23:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:44.287 rmmod nvme_tcp 00:11:44.287 rmmod nvme_fabrics 00:11:44.287 rmmod nvme_keyring 00:11:44.287 16:23:18 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:44.287 16:23:18 -- nvmf/common.sh@124 -- # set -e 00:11:44.287 16:23:18 -- nvmf/common.sh@125 -- # return 0 00:11:44.287 16:23:18 -- nvmf/common.sh@478 -- # '[' -n 72983 ']' 00:11:44.287 16:23:18 -- nvmf/common.sh@479 -- # killprocess 72983 00:11:44.287 16:23:18 -- common/autotest_common.sh@936 -- # '[' -z 72983 ']' 00:11:44.287 16:23:18 -- common/autotest_common.sh@940 -- # kill -0 72983 00:11:44.287 16:23:18 -- common/autotest_common.sh@941 -- # uname 00:11:44.287 16:23:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:44.287 16:23:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72983 00:11:44.287 16:23:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:44.287 16:23:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:44.287 killing process with pid 72983 00:11:44.287 16:23:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72983' 00:11:44.287 16:23:18 -- common/autotest_common.sh@955 -- # kill 72983 00:11:44.287 16:23:18 -- common/autotest_common.sh@960 -- # wait 72983 00:11:44.546 16:23:18 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:44.546 16:23:18 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:44.546 16:23:18 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:44.546 16:23:18 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:44.546 16:23:18 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:44.546 16:23:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:44.546 16:23:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:44.546 16:23:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:44.804 16:23:18 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:44.804 00:11:44.804 real 0m42.174s 00:11:44.804 user 1m8.202s 00:11:44.804 sys 0m10.943s 00:11:44.804 16:23:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:44.804 ************************************ 00:11:44.804 END TEST nvmf_lvs_grow 00:11:44.804 16:23:18 -- 
common/autotest_common.sh@10 -- # set +x 00:11:44.804 ************************************ 00:11:44.804 16:23:18 -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:44.804 16:23:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:44.804 16:23:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:44.804 16:23:18 -- common/autotest_common.sh@10 -- # set +x 00:11:44.804 ************************************ 00:11:44.804 START TEST nvmf_bdev_io_wait 00:11:44.804 ************************************ 00:11:44.804 16:23:18 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:44.804 * Looking for test storage... 00:11:44.804 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:44.804 16:23:18 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:44.804 16:23:18 -- nvmf/common.sh@7 -- # uname -s 00:11:44.804 16:23:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:44.804 16:23:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:44.804 16:23:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:44.804 16:23:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:44.804 16:23:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:44.804 16:23:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:44.804 16:23:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:44.804 16:23:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:44.804 16:23:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:44.804 16:23:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:44.804 16:23:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d 00:11:44.804 16:23:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=35bbb10f-fc38-42ac-b909-033700c5e05d 00:11:44.804 16:23:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:44.804 16:23:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:44.804 16:23:18 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:44.804 16:23:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:44.804 16:23:18 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:44.804 16:23:18 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:44.804 16:23:18 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:44.804 16:23:18 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:44.804 16:23:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.804 16:23:18 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.804 16:23:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.804 16:23:18 -- paths/export.sh@5 -- # export PATH 00:11:44.804 16:23:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.804 16:23:18 -- nvmf/common.sh@47 -- # : 0 00:11:44.804 16:23:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:44.804 16:23:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:44.804 16:23:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:44.804 16:23:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:44.804 16:23:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:44.804 16:23:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:44.804 16:23:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:44.804 16:23:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:44.804 16:23:18 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:44.805 16:23:18 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:44.805 16:23:18 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:11:44.805 16:23:18 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:44.805 16:23:18 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:44.805 16:23:18 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:44.805 16:23:18 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:44.805 16:23:18 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:44.805 16:23:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:44.805 16:23:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:44.805 16:23:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:44.805 16:23:18 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:11:44.805 16:23:18 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:11:44.805 16:23:18 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:11:44.805 16:23:18 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:11:44.805 16:23:18 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 
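The nvmftestinit trace above is the harness choosing a network flavor: is_hw=no rules out the phy and phy-fallback branches, and with NET_TYPE=virt plus a tcp transport it falls through to nvmf_veth_init, whose setup is traced next. A hedged paraphrase of that branch, not the verbatim common.sh source:

# simplified decision: virtual networking + TCP -> build a veth/bridge topology
if [[ "$NET_TYPE" == virt && "$TEST_TRANSPORT" == tcp ]]; then
    nvmf_veth_init   # creates the nvmf_init_if/nvmf_tgt_if veth pairs and the nvmf_br bridge
fi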
00:11:44.805 16:23:18 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:11:44.805 16:23:18 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:44.805 16:23:18 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:44.805 16:23:18 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:44.805 16:23:18 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:44.805 16:23:18 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:44.805 16:23:18 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:44.805 16:23:18 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:44.805 16:23:18 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:44.805 16:23:18 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:44.805 16:23:18 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:44.805 16:23:18 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:44.805 16:23:18 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:44.805 16:23:18 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:45.062 16:23:18 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:45.062 Cannot find device "nvmf_tgt_br" 00:11:45.063 16:23:18 -- nvmf/common.sh@155 -- # true 00:11:45.063 16:23:18 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:45.063 Cannot find device "nvmf_tgt_br2" 00:11:45.063 16:23:18 -- nvmf/common.sh@156 -- # true 00:11:45.063 16:23:18 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:45.063 16:23:18 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:45.063 Cannot find device "nvmf_tgt_br" 00:11:45.063 16:23:18 -- nvmf/common.sh@158 -- # true 00:11:45.063 16:23:18 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:45.063 Cannot find device "nvmf_tgt_br2" 00:11:45.063 16:23:18 -- nvmf/common.sh@159 -- # true 00:11:45.063 16:23:18 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:45.063 16:23:18 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:45.063 16:23:18 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:45.063 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:45.063 16:23:18 -- nvmf/common.sh@162 -- # true 00:11:45.063 16:23:18 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:45.063 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:45.063 16:23:18 -- nvmf/common.sh@163 -- # true 00:11:45.063 16:23:18 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:45.063 16:23:18 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:45.063 16:23:18 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:45.063 16:23:18 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:45.063 16:23:18 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:45.063 16:23:19 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:45.063 16:23:19 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:45.063 16:23:19 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:45.063 16:23:19 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:45.063 
16:23:19 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:45.063 16:23:19 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:45.063 16:23:19 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:45.063 16:23:19 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:45.063 16:23:19 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:45.063 16:23:19 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:45.063 16:23:19 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:45.063 16:23:19 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:45.063 16:23:19 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:45.063 16:23:19 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:45.063 16:23:19 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:45.063 16:23:19 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:45.320 16:23:19 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:45.320 16:23:19 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:45.320 16:23:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:45.320 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:45.320 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:11:45.320 00:11:45.320 --- 10.0.0.2 ping statistics --- 00:11:45.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:45.320 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:11:45.320 16:23:19 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:45.320 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:45.320 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:11:45.320 00:11:45.320 --- 10.0.0.3 ping statistics --- 00:11:45.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:45.321 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:11:45.321 16:23:19 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:45.321 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:45.321 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:11:45.321 00:11:45.321 --- 10.0.0.1 ping statistics --- 00:11:45.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:45.321 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:11:45.321 16:23:19 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:45.321 16:23:19 -- nvmf/common.sh@422 -- # return 0 00:11:45.321 16:23:19 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:45.321 16:23:19 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:45.321 16:23:19 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:45.321 16:23:19 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:45.321 16:23:19 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:45.321 16:23:19 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:45.321 16:23:19 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:45.321 16:23:19 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:11:45.321 16:23:19 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:45.321 16:23:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:45.321 16:23:19 -- common/autotest_common.sh@10 -- # set +x 00:11:45.321 16:23:19 -- nvmf/common.sh@470 -- # nvmfpid=73412 00:11:45.321 16:23:19 -- nvmf/common.sh@471 -- # waitforlisten 73412 00:11:45.321 16:23:19 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:11:45.321 16:23:19 -- common/autotest_common.sh@817 -- # '[' -z 73412 ']' 00:11:45.321 16:23:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:45.321 16:23:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:45.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:45.321 16:23:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:45.321 16:23:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:45.321 16:23:19 -- common/autotest_common.sh@10 -- # set +x 00:11:45.321 [2024-04-17 16:23:19.225158] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:11:45.321 [2024-04-17 16:23:19.225258] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:45.321 [2024-04-17 16:23:19.364608] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:45.579 [2024-04-17 16:23:19.487417] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:45.579 [2024-04-17 16:23:19.487483] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:45.579 [2024-04-17 16:23:19.487496] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:45.579 [2024-04-17 16:23:19.487505] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:45.579 [2024-04-17 16:23:19.487512] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
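The target itself runs inside the nvmf_tgt_ns_spdk namespace, so its TCP listeners bind the namespaced 10.0.0.2 address rather than anything on the host, and --wait-for-rpc parks the app before framework init so pre-init options can still be applied over RPC. An equivalent manual launch, assuming the veth and namespace setup traced above is in place:

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
nvmfpid=$!
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# mirrors the test's first two RPCs below: bdev options must be set before
# framework init, which is exactly what --wait-for-rpc makes possible
$rpc bdev_set_options -p 5 -c 1
$rpc framework_start_init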
00:11:45.579 [2024-04-17 16:23:19.487674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:45.579 [2024-04-17 16:23:19.487758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:45.579 [2024-04-17 16:23:19.488167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.579 [2024-04-17 16:23:19.488241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:46.143 16:23:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:46.143 16:23:20 -- common/autotest_common.sh@850 -- # return 0 00:11:46.143 16:23:20 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:46.143 16:23:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:46.143 16:23:20 -- common/autotest_common.sh@10 -- # set +x 00:11:46.401 16:23:20 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:46.401 16:23:20 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:11:46.401 16:23:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:46.401 16:23:20 -- common/autotest_common.sh@10 -- # set +x 00:11:46.401 16:23:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:46.401 16:23:20 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:11:46.401 16:23:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:46.401 16:23:20 -- common/autotest_common.sh@10 -- # set +x 00:11:46.401 16:23:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:46.401 16:23:20 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:46.401 16:23:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:46.401 16:23:20 -- common/autotest_common.sh@10 -- # set +x 00:11:46.401 [2024-04-17 16:23:20.308757] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:46.401 16:23:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:46.401 16:23:20 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:46.401 16:23:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:46.401 16:23:20 -- common/autotest_common.sh@10 -- # set +x 00:11:46.401 Malloc0 00:11:46.401 16:23:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:46.401 16:23:20 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:46.401 16:23:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:46.401 16:23:20 -- common/autotest_common.sh@10 -- # set +x 00:11:46.401 16:23:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:46.401 16:23:20 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:46.401 16:23:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:46.401 16:23:20 -- common/autotest_common.sh@10 -- # set +x 00:11:46.401 16:23:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:46.401 16:23:20 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:46.401 16:23:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:46.401 16:23:20 -- common/autotest_common.sh@10 -- # set +x 00:11:46.401 [2024-04-17 16:23:20.363857] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:46.401 16:23:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:46.401 16:23:20 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=73465 00:11:46.401 16:23:20 
-- target/bdev_io_wait.sh@30 -- # READ_PID=73467 00:11:46.401 16:23:20 -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:11:46.401 16:23:20 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:11:46.401 16:23:20 -- nvmf/common.sh@521 -- # config=() 00:11:46.401 16:23:20 -- nvmf/common.sh@521 -- # local subsystem config 00:11:46.401 16:23:20 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:11:46.401 16:23:20 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:11:46.401 16:23:20 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=73469 00:11:46.401 16:23:20 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:11:46.401 16:23:20 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:11:46.401 { 00:11:46.401 "params": { 00:11:46.401 "name": "Nvme$subsystem", 00:11:46.401 "trtype": "$TEST_TRANSPORT", 00:11:46.401 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:46.401 "adrfam": "ipv4", 00:11:46.401 "trsvcid": "$NVMF_PORT", 00:11:46.401 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:46.401 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:46.401 "hdgst": ${hdgst:-false}, 00:11:46.401 "ddgst": ${ddgst:-false} 00:11:46.401 }, 00:11:46.401 "method": "bdev_nvme_attach_controller" 00:11:46.401 } 00:11:46.401 EOF 00:11:46.401 )") 00:11:46.401 16:23:20 -- nvmf/common.sh@521 -- # config=() 00:11:46.401 16:23:20 -- nvmf/common.sh@521 -- # local subsystem config 00:11:46.401 16:23:20 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:11:46.401 16:23:20 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:11:46.401 { 00:11:46.401 "params": { 00:11:46.401 "name": "Nvme$subsystem", 00:11:46.401 "trtype": "$TEST_TRANSPORT", 00:11:46.401 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:46.401 "adrfam": "ipv4", 00:11:46.401 "trsvcid": "$NVMF_PORT", 00:11:46.401 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:46.401 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:46.401 "hdgst": ${hdgst:-false}, 00:11:46.401 "ddgst": ${ddgst:-false} 00:11:46.401 }, 00:11:46.401 "method": "bdev_nvme_attach_controller" 00:11:46.401 } 00:11:46.401 EOF 00:11:46.401 )") 00:11:46.401 16:23:20 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:11:46.401 16:23:20 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=73471 00:11:46.401 16:23:20 -- target/bdev_io_wait.sh@35 -- # sync 00:11:46.401 16:23:20 -- nvmf/common.sh@543 -- # cat 00:11:46.401 16:23:20 -- nvmf/common.sh@543 -- # cat 00:11:46.401 16:23:20 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:11:46.401 16:23:20 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:11:46.401 16:23:20 -- nvmf/common.sh@521 -- # config=() 00:11:46.401 16:23:20 -- nvmf/common.sh@521 -- # local subsystem config 00:11:46.401 16:23:20 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:11:46.401 16:23:20 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:11:46.401 { 00:11:46.401 "params": { 00:11:46.401 "name": "Nvme$subsystem", 00:11:46.401 "trtype": "$TEST_TRANSPORT", 00:11:46.401 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:46.401 "adrfam": "ipv4", 00:11:46.401 "trsvcid": "$NVMF_PORT", 00:11:46.401 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:11:46.401 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:46.401 "hdgst": ${hdgst:-false}, 00:11:46.401 "ddgst": ${ddgst:-false} 00:11:46.401 }, 00:11:46.401 "method": "bdev_nvme_attach_controller" 00:11:46.401 } 00:11:46.401 EOF 00:11:46.401 )") 00:11:46.401 16:23:20 -- nvmf/common.sh@545 -- # jq . 00:11:46.401 16:23:20 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:11:46.401 16:23:20 -- nvmf/common.sh@521 -- # config=() 00:11:46.401 16:23:20 -- nvmf/common.sh@545 -- # jq . 00:11:46.401 16:23:20 -- nvmf/common.sh@521 -- # local subsystem config 00:11:46.401 16:23:20 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:11:46.401 16:23:20 -- nvmf/common.sh@543 -- # cat 00:11:46.401 16:23:20 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:11:46.401 { 00:11:46.401 "params": { 00:11:46.401 "name": "Nvme$subsystem", 00:11:46.401 "trtype": "$TEST_TRANSPORT", 00:11:46.401 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:46.401 "adrfam": "ipv4", 00:11:46.401 "trsvcid": "$NVMF_PORT", 00:11:46.401 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:46.401 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:46.401 "hdgst": ${hdgst:-false}, 00:11:46.401 "ddgst": ${ddgst:-false} 00:11:46.401 }, 00:11:46.401 "method": "bdev_nvme_attach_controller" 00:11:46.401 } 00:11:46.401 EOF 00:11:46.401 )") 00:11:46.401 16:23:20 -- nvmf/common.sh@546 -- # IFS=, 00:11:46.401 16:23:20 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:11:46.401 "params": { 00:11:46.401 "name": "Nvme1", 00:11:46.401 "trtype": "tcp", 00:11:46.401 "traddr": "10.0.0.2", 00:11:46.401 "adrfam": "ipv4", 00:11:46.401 "trsvcid": "4420", 00:11:46.401 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:46.401 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:46.402 "hdgst": false, 00:11:46.402 "ddgst": false 00:11:46.402 }, 00:11:46.402 "method": "bdev_nvme_attach_controller" 00:11:46.402 }' 00:11:46.402 16:23:20 -- nvmf/common.sh@546 -- # IFS=, 00:11:46.402 16:23:20 -- nvmf/common.sh@543 -- # cat 00:11:46.402 16:23:20 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:11:46.402 "params": { 00:11:46.402 "name": "Nvme1", 00:11:46.402 "trtype": "tcp", 00:11:46.402 "traddr": "10.0.0.2", 00:11:46.402 "adrfam": "ipv4", 00:11:46.402 "trsvcid": "4420", 00:11:46.402 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:46.402 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:46.402 "hdgst": false, 00:11:46.402 "ddgst": false 00:11:46.402 }, 00:11:46.402 "method": "bdev_nvme_attach_controller" 00:11:46.402 }' 00:11:46.402 16:23:20 -- nvmf/common.sh@545 -- # jq . 00:11:46.402 16:23:20 -- nvmf/common.sh@546 -- # IFS=, 00:11:46.402 16:23:20 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:11:46.402 "params": { 00:11:46.402 "name": "Nvme1", 00:11:46.402 "trtype": "tcp", 00:11:46.402 "traddr": "10.0.0.2", 00:11:46.402 "adrfam": "ipv4", 00:11:46.402 "trsvcid": "4420", 00:11:46.402 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:46.402 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:46.402 "hdgst": false, 00:11:46.402 "ddgst": false 00:11:46.402 }, 00:11:46.402 "method": "bdev_nvme_attach_controller" 00:11:46.402 }' 00:11:46.402 16:23:20 -- nvmf/common.sh@545 -- # jq . 
00:11:46.402 16:23:20 -- nvmf/common.sh@546 -- # IFS=, 00:11:46.402 16:23:20 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:11:46.402 "params": { 00:11:46.402 "name": "Nvme1", 00:11:46.402 "trtype": "tcp", 00:11:46.402 "traddr": "10.0.0.2", 00:11:46.402 "adrfam": "ipv4", 00:11:46.402 "trsvcid": "4420", 00:11:46.402 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:46.402 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:46.402 "hdgst": false, 00:11:46.402 "ddgst": false 00:11:46.402 }, 00:11:46.402 "method": "bdev_nvme_attach_controller" 00:11:46.402 }' 00:11:46.402 [2024-04-17 16:23:20.424433] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:11:46.402 [2024-04-17 16:23:20.424527] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:11:46.402 16:23:20 -- target/bdev_io_wait.sh@37 -- # wait 73465 00:11:46.661 [2024-04-17 16:23:20.445418] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:11:46.661 [2024-04-17 16:23:20.445695] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:11:46.661 [2024-04-17 16:23:20.457380] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:11:46.661 [2024-04-17 16:23:20.457470] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:11:46.661 [2024-04-17 16:23:20.472624] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:11:46.661 [2024-04-17 16:23:20.472758] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:11:46.661 [2024-04-17 16:23:20.645654] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.919 [2024-04-17 16:23:20.715471] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.919 [2024-04-17 16:23:20.755269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:11:46.919 [2024-04-17 16:23:20.764144] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:11:46.919 [2024-04-17 16:23:20.795475] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.919 [2024-04-17 16:23:20.821394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:11:46.919 [2024-04-17 16:23:20.830241] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:11:46.919 [2024-04-17 16:23:20.879485] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.919 [2024-04-17 16:23:20.900434] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:11:46.919 [2024-04-17 16:23:20.901126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:11:46.920 Running I/O for 1 seconds... 
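What the four PIDs above amount to: one bdevperf per workload, each pinned to its own core (masks 0x10/0x20/0x40/0x80 land on cores 4-7, matching the reactor messages) with a distinct shm id. The interleaved rpc.c "No server listening" ERROR lines do not fail the run; all four jobs report results below. The launches in hedged form, flags verbatim from the trace:

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
# gen_nvmf_target_json is the harness function whose resolved output is printed
# above; process substitution is why the trace shows --json /dev/fd/63
$bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
$bdevperf -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
$bdevperf -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
$bdevperf -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"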
00:11:46.920 [2024-04-17 16:23:20.909939] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:11:47.177 [2024-04-17 16:23:20.974020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:11:47.177 [2024-04-17 16:23:20.978735] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:11:47.177 Running I/O for 1 seconds... 00:11:47.177 [2024-04-17 16:23:20.982837] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:11:47.177 [2024-04-17 16:23:21.063628] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:11:47.177 Running I/O for 1 seconds... 00:11:47.177 [2024-04-17 16:23:21.114569] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:11:47.177 Running I/O for 1 seconds... 00:11:48.180 00:11:48.180 Latency(us) 00:11:48.180 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:48.180 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:11:48.180 Nvme1n1 : 1.02 6416.02 25.06 0.00 0.00 19823.50 8043.05 34555.35 00:11:48.180 =================================================================================================================== 00:11:48.180 Total : 6416.02 25.06 0.00 0.00 19823.50 8043.05 34555.35 00:11:48.180 00:11:48.180 Latency(us) 00:11:48.180 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:48.180 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:11:48.180 Nvme1n1 : 1.00 197958.28 773.27 0.00 0.00 644.05 264.38 1050.07 00:11:48.180 =================================================================================================================== 00:11:48.180 Total : 197958.28 773.27 0.00 0.00 644.05 264.38 1050.07 00:11:48.180 00:11:48.180 Latency(us) 00:11:48.181 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:48.181 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:11:48.181 Nvme1n1 : 1.01 9425.23 36.82 0.00 0.00 13524.62 6523.81 23116.33 00:11:48.181 =================================================================================================================== 00:11:48.181 Total : 9425.23 36.82 0.00 0.00 13524.62 6523.81 23116.33 00:11:48.181 00:11:48.181 Latency(us) 00:11:48.181 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:48.181 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:11:48.181 Nvme1n1 : 1.01 5782.95 22.59 0.00 0.00 22010.23 10724.07 47662.55 00:11:48.181 =================================================================================================================== 00:11:48.181 Total : 5782.95 22.59 0.00 0.00 22010.23 10724.07 47662.55 00:11:48.181 16:23:22 -- target/bdev_io_wait.sh@38 -- # wait 73467 00:11:48.437 16:23:22 -- target/bdev_io_wait.sh@39 -- # wait 73469 00:11:48.437 16:23:22 -- target/bdev_io_wait.sh@40 -- # wait 73471 00:11:48.437 16:23:22 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:48.437 16:23:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:48.437 16:23:22 -- common/autotest_common.sh@10 -- # set +x 00:11:48.437 16:23:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:48.437 16:23:22 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:11:48.437 16:23:22 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:11:48.437 16:23:22 -- 
nvmf/common.sh@477 -- # nvmfcleanup 00:11:48.437 16:23:22 -- nvmf/common.sh@117 -- # sync 00:11:48.695 16:23:22 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:48.695 16:23:22 -- nvmf/common.sh@120 -- # set +e 00:11:48.695 16:23:22 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:48.695 16:23:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:48.695 rmmod nvme_tcp 00:11:48.695 rmmod nvme_fabrics 00:11:48.695 rmmod nvme_keyring 00:11:48.695 16:23:22 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:48.695 16:23:22 -- nvmf/common.sh@124 -- # set -e 00:11:48.695 16:23:22 -- nvmf/common.sh@125 -- # return 0 00:11:48.695 16:23:22 -- nvmf/common.sh@478 -- # '[' -n 73412 ']' 00:11:48.695 16:23:22 -- nvmf/common.sh@479 -- # killprocess 73412 00:11:48.695 16:23:22 -- common/autotest_common.sh@936 -- # '[' -z 73412 ']' 00:11:48.695 16:23:22 -- common/autotest_common.sh@940 -- # kill -0 73412 00:11:48.695 16:23:22 -- common/autotest_common.sh@941 -- # uname 00:11:48.695 16:23:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:48.695 16:23:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73412 00:11:48.695 16:23:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:48.695 16:23:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:48.695 killing process with pid 73412 00:11:48.695 16:23:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73412' 00:11:48.695 16:23:22 -- common/autotest_common.sh@955 -- # kill 73412 00:11:48.695 16:23:22 -- common/autotest_common.sh@960 -- # wait 73412 00:11:48.954 16:23:22 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:48.954 16:23:22 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:48.954 16:23:22 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:48.954 16:23:22 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:48.954 16:23:22 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:48.954 16:23:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.954 16:23:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:48.954 16:23:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.954 16:23:22 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:48.954 00:11:48.954 real 0m4.175s 00:11:48.954 user 0m18.541s 00:11:48.954 sys 0m1.988s 00:11:48.954 16:23:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:48.954 ************************************ 00:11:48.954 END TEST nvmf_bdev_io_wait 00:11:48.954 ************************************ 00:11:48.954 16:23:22 -- common/autotest_common.sh@10 -- # set +x 00:11:48.954 16:23:22 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:48.954 16:23:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:48.954 16:23:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:48.954 16:23:22 -- common/autotest_common.sh@10 -- # set +x 00:11:48.954 ************************************ 00:11:48.954 START TEST nvmf_queue_depth 00:11:48.954 ************************************ 00:11:48.954 16:23:22 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:49.213 * Looking for test storage... 
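The nvmftestfini teardown traced just above follows a fixed order: sync, unload the kernel initiator modules, kill the target, then remove the namespace and flush addresses; killprocess additionally checks via ps that pid 73412 is still the reactor_0 process before signalling it. A condensed sketch, with one labeled assumption:

sync
modprobe -v -r nvme-tcp      # also drops nvme_fabrics and nvme_keyring, per the rmmod lines
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"
_remove_spdk_ns              # assumed to delete nvmf_tgt_ns_spdk; its body is elided in the trace
ip -4 addr flush nvmf_init_if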
00:11:49.213 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:49.213 16:23:23 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:49.213 16:23:23 -- nvmf/common.sh@7 -- # uname -s 00:11:49.213 16:23:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:49.213 16:23:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:49.213 16:23:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:49.213 16:23:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:49.213 16:23:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:49.213 16:23:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:49.213 16:23:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:49.213 16:23:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:49.213 16:23:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:49.213 16:23:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:49.213 16:23:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d 00:11:49.213 16:23:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=35bbb10f-fc38-42ac-b909-033700c5e05d 00:11:49.213 16:23:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:49.213 16:23:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:49.213 16:23:23 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:49.213 16:23:23 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:49.213 16:23:23 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:49.213 16:23:23 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:49.213 16:23:23 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:49.213 16:23:23 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:49.213 16:23:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.213 16:23:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.213 16:23:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.213 16:23:23 -- paths/export.sh@5 -- # export PATH 00:11:49.213 16:23:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.213 16:23:23 -- nvmf/common.sh@47 -- # : 0 00:11:49.213 16:23:23 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:49.213 16:23:23 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:49.213 16:23:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:49.213 16:23:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:49.213 16:23:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:49.213 16:23:23 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:49.213 16:23:23 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:49.213 16:23:23 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:49.213 16:23:23 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:11:49.213 16:23:23 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:11:49.213 16:23:23 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:49.213 16:23:23 -- target/queue_depth.sh@19 -- # nvmftestinit 00:11:49.213 16:23:23 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:49.213 16:23:23 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:49.213 16:23:23 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:49.213 16:23:23 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:49.213 16:23:23 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:49.213 16:23:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:49.213 16:23:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:49.213 16:23:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:49.213 16:23:23 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:11:49.213 16:23:23 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:11:49.213 16:23:23 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:11:49.213 16:23:23 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:11:49.213 16:23:23 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:11:49.213 16:23:23 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:11:49.213 16:23:23 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:49.213 16:23:23 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:49.213 16:23:23 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:49.213 16:23:23 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:49.213 16:23:23 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:49.213 16:23:23 -- 
nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:49.213 16:23:23 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:49.213 16:23:23 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:49.213 16:23:23 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:49.213 16:23:23 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:49.213 16:23:23 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:49.213 16:23:23 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:49.213 16:23:23 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:49.213 16:23:23 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:49.213 Cannot find device "nvmf_tgt_br" 00:11:49.213 16:23:23 -- nvmf/common.sh@155 -- # true 00:11:49.213 16:23:23 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:49.213 Cannot find device "nvmf_tgt_br2" 00:11:49.213 16:23:23 -- nvmf/common.sh@156 -- # true 00:11:49.213 16:23:23 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:49.213 16:23:23 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:49.213 Cannot find device "nvmf_tgt_br" 00:11:49.213 16:23:23 -- nvmf/common.sh@158 -- # true 00:11:49.213 16:23:23 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:49.213 Cannot find device "nvmf_tgt_br2" 00:11:49.213 16:23:23 -- nvmf/common.sh@159 -- # true 00:11:49.213 16:23:23 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:49.213 16:23:23 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:49.213 16:23:23 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:49.472 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:49.472 16:23:23 -- nvmf/common.sh@162 -- # true 00:11:49.472 16:23:23 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:49.472 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:49.472 16:23:23 -- nvmf/common.sh@163 -- # true 00:11:49.472 16:23:23 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:49.472 16:23:23 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:49.472 16:23:23 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:49.472 16:23:23 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:49.472 16:23:23 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:49.472 16:23:23 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:49.472 16:23:23 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:49.472 16:23:23 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:49.472 16:23:23 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:49.472 16:23:23 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:49.472 16:23:23 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:49.472 16:23:23 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:49.472 16:23:23 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:49.472 16:23:23 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:49.472 16:23:23 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:11:49.472 16:23:23 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:49.472 16:23:23 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:49.472 16:23:23 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:49.472 16:23:23 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:49.472 16:23:23 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:49.472 16:23:23 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:49.472 16:23:23 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:49.472 16:23:23 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:49.472 16:23:23 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:49.472 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:49.472 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:11:49.472 00:11:49.472 --- 10.0.0.2 ping statistics --- 00:11:49.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.472 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:11:49.473 16:23:23 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:49.473 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:49.473 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:11:49.473 00:11:49.473 --- 10.0.0.3 ping statistics --- 00:11:49.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.473 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:11:49.473 16:23:23 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:49.473 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:49.473 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:11:49.473 00:11:49.473 --- 10.0.0.1 ping statistics --- 00:11:49.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.473 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:11:49.473 16:23:23 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:49.473 16:23:23 -- nvmf/common.sh@422 -- # return 0 00:11:49.473 16:23:23 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:49.473 16:23:23 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:49.473 16:23:23 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:49.473 16:23:23 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:49.473 16:23:23 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:49.473 16:23:23 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:49.473 16:23:23 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:49.473 16:23:23 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:11:49.473 16:23:23 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:49.473 16:23:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:49.473 16:23:23 -- common/autotest_common.sh@10 -- # set +x 00:11:49.473 16:23:23 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:49.473 16:23:23 -- nvmf/common.sh@470 -- # nvmfpid=73708 00:11:49.473 16:23:23 -- nvmf/common.sh@471 -- # waitforlisten 73708 00:11:49.473 16:23:23 -- common/autotest_common.sh@817 -- # '[' -z 73708 ']' 00:11:49.473 16:23:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
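Note the mask change: this target runs with -m 0x2, a single reactor on core 1, where the bdev_io_wait target used 0xF (cores 0-3); a queue-depth test exercises one connection's submission path, so presumably one target core is all it needs. Launch verbatim from the trace, followed by one plausible stand-in for the waitforlisten loop (not the harness's exact code):

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
# poll until the RPC socket answers; rpc_get_methods is a cheap no-op query
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods &>/dev/null; do
    sleep 0.1
done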
00:11:49.473 16:23:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:49.473 16:23:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.473 16:23:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:49.473 16:23:23 -- common/autotest_common.sh@10 -- # set +x 00:11:49.732 [2024-04-17 16:23:23.549503] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:11:49.732 [2024-04-17 16:23:23.549593] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:49.732 [2024-04-17 16:23:23.684099] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:49.990 [2024-04-17 16:23:23.800362] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:49.990 [2024-04-17 16:23:23.800428] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:49.990 [2024-04-17 16:23:23.800441] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:49.990 [2024-04-17 16:23:23.800449] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:49.990 [2024-04-17 16:23:23.800457] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:49.990 [2024-04-17 16:23:23.800485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:50.558 16:23:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:50.558 16:23:24 -- common/autotest_common.sh@850 -- # return 0 00:11:50.558 16:23:24 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:50.558 16:23:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:50.558 16:23:24 -- common/autotest_common.sh@10 -- # set +x 00:11:50.558 16:23:24 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:50.558 16:23:24 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:50.558 16:23:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:50.558 16:23:24 -- common/autotest_common.sh@10 -- # set +x 00:11:50.558 [2024-04-17 16:23:24.529445] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:50.558 16:23:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:50.558 16:23:24 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:50.558 16:23:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:50.558 16:23:24 -- common/autotest_common.sh@10 -- # set +x 00:11:50.558 Malloc0 00:11:50.558 16:23:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:50.558 16:23:24 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:50.558 16:23:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:50.558 16:23:24 -- common/autotest_common.sh@10 -- # set +x 00:11:50.558 16:23:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:50.558 16:23:24 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:50.558 16:23:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:50.558 16:23:24 -- common/autotest_common.sh@10 -- # set +x 00:11:50.558 16:23:24 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:50.558 16:23:24 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:50.558 16:23:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:50.558 16:23:24 -- common/autotest_common.sh@10 -- # set +x 00:11:50.558 [2024-04-17 16:23:24.586768] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:50.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:50.558 16:23:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:50.558 16:23:24 -- target/queue_depth.sh@30 -- # bdevperf_pid=73758 00:11:50.558 16:23:24 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:11:50.558 16:23:24 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:50.558 16:23:24 -- target/queue_depth.sh@33 -- # waitforlisten 73758 /var/tmp/bdevperf.sock 00:11:50.558 16:23:24 -- common/autotest_common.sh@817 -- # '[' -z 73758 ']' 00:11:50.558 16:23:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:50.558 16:23:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:50.558 16:23:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:50.558 16:23:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:50.558 16:23:24 -- common/autotest_common.sh@10 -- # set +x 00:11:50.817 [2024-04-17 16:23:24.661828] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:11:50.817 [2024-04-17 16:23:24.662157] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73758 ] 00:11:50.817 [2024-04-17 16:23:24.800348] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.076 [2024-04-17 16:23:24.931840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.012 16:23:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:52.012 16:23:25 -- common/autotest_common.sh@850 -- # return 0 00:11:52.012 16:23:25 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:11:52.012 16:23:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:52.012 16:23:25 -- common/autotest_common.sh@10 -- # set +x 00:11:52.012 NVMe0n1 00:11:52.012 16:23:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:52.012 16:23:25 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:52.012 Running I/O for 10 seconds... 
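While bdevperf runs, note the target-side sequence the rpc_cmd calls above map to: rpc_cmd is the harness's retry wrapper around scripts/rpc.py against /var/tmp/spdk.sock, so the equivalent standalone sketch is the following (arguments copied from the trace; $rpc is shorthand introduced here, not part of the test):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py                      # shorthand for this sketch
$rpc nvmf_create_transport -t tcp -o -u 8192                         # TCP transport, flags as traced
$rpc bdev_malloc_create 64 512 -b Malloc0                            # 64 MiB RAM-backed bdev, 512B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# bdevperf was started with -z, so it waits on its own RPC socket; the test
# then attaches it to the target over TCP and kicks off the verify workload:
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests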
00:12:01.986 00:12:01.986 Latency(us) 00:12:01.986 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:01.986 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:12:01.986 Verification LBA range: start 0x0 length 0x4000 00:12:01.986 NVMe0n1 : 10.09 8210.40 32.07 0.00 0.00 124125.84 28835.84 94848.47 00:12:01.986 =================================================================================================================== 00:12:01.986 Total : 8210.40 32.07 0.00 0.00 124125.84 28835.84 94848.47 00:12:01.986 0 00:12:01.986 16:23:36 -- target/queue_depth.sh@39 -- # killprocess 73758 00:12:01.986 16:23:36 -- common/autotest_common.sh@936 -- # '[' -z 73758 ']' 00:12:01.986 16:23:36 -- common/autotest_common.sh@940 -- # kill -0 73758 00:12:01.986 16:23:36 -- common/autotest_common.sh@941 -- # uname 00:12:01.986 16:23:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:01.986 16:23:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73758 00:12:02.245 killing process with pid 73758 00:12:02.245 Received shutdown signal, test time was about 10.000000 seconds 00:12:02.245 00:12:02.245 Latency(us) 00:12:02.245 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:02.245 =================================================================================================================== 00:12:02.245 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:02.245 16:23:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:02.245 16:23:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:02.245 16:23:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73758' 00:12:02.245 16:23:36 -- common/autotest_common.sh@955 -- # kill 73758 00:12:02.245 16:23:36 -- common/autotest_common.sh@960 -- # wait 73758 00:12:02.504 16:23:36 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:02.504 16:23:36 -- target/queue_depth.sh@43 -- # nvmftestfini 00:12:02.504 16:23:36 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:02.504 16:23:36 -- nvmf/common.sh@117 -- # sync 00:12:02.504 16:23:36 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:02.504 16:23:36 -- nvmf/common.sh@120 -- # set +e 00:12:02.504 16:23:36 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:02.504 16:23:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:02.504 rmmod nvme_tcp 00:12:02.504 rmmod nvme_fabrics 00:12:02.504 rmmod nvme_keyring 00:12:02.504 16:23:36 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:02.504 16:23:36 -- nvmf/common.sh@124 -- # set -e 00:12:02.504 16:23:36 -- nvmf/common.sh@125 -- # return 0 00:12:02.504 16:23:36 -- nvmf/common.sh@478 -- # '[' -n 73708 ']' 00:12:02.504 16:23:36 -- nvmf/common.sh@479 -- # killprocess 73708 00:12:02.504 16:23:36 -- common/autotest_common.sh@936 -- # '[' -z 73708 ']' 00:12:02.504 16:23:36 -- common/autotest_common.sh@940 -- # kill -0 73708 00:12:02.504 16:23:36 -- common/autotest_common.sh@941 -- # uname 00:12:02.504 16:23:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:02.504 16:23:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73708 00:12:02.504 killing process with pid 73708 00:12:02.504 16:23:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:02.504 16:23:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:02.504 16:23:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73708' 00:12:02.504 16:23:36 -- 
common/autotest_common.sh@955 -- # kill 73708 00:12:02.504 16:23:36 -- common/autotest_common.sh@960 -- # wait 73708 00:12:02.763 16:23:36 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:02.763 16:23:36 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:02.763 16:23:36 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:02.763 16:23:36 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:02.763 16:23:36 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:02.763 16:23:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.763 16:23:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:02.763 16:23:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.763 16:23:36 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:02.763 00:12:02.763 real 0m13.794s 00:12:02.763 user 0m23.559s 00:12:02.763 sys 0m2.214s 00:12:02.763 ************************************ 00:12:02.763 END TEST nvmf_queue_depth 00:12:02.763 ************************************ 00:12:02.763 16:23:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:02.763 16:23:36 -- common/autotest_common.sh@10 -- # set +x 00:12:03.023 16:23:36 -- nvmf/nvmf.sh@52 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:03.023 16:23:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:03.023 16:23:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:03.023 16:23:36 -- common/autotest_common.sh@10 -- # set +x 00:12:03.023 ************************************ 00:12:03.023 START TEST nvmf_multipath 00:12:03.023 ************************************ 00:12:03.023 16:23:36 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:03.023 * Looking for test storage... 
00:12:03.023 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:03.023 16:23:36 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:03.023 16:23:36 -- nvmf/common.sh@7 -- # uname -s 00:12:03.023 16:23:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:03.023 16:23:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:03.023 16:23:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:03.023 16:23:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:03.023 16:23:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:03.023 16:23:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:03.023 16:23:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:03.023 16:23:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:03.023 16:23:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:03.023 16:23:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:03.023 16:23:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d 00:12:03.023 16:23:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=35bbb10f-fc38-42ac-b909-033700c5e05d 00:12:03.023 16:23:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:03.023 16:23:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:03.023 16:23:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:03.023 16:23:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:03.023 16:23:37 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:03.023 16:23:37 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:03.023 16:23:37 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:03.023 16:23:37 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:03.023 16:23:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.023 16:23:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.023 16:23:37 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.023 16:23:37 -- paths/export.sh@5 -- # export PATH 00:12:03.023 16:23:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.023 16:23:37 -- nvmf/common.sh@47 -- # : 0 00:12:03.023 16:23:37 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:03.023 16:23:37 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:03.023 16:23:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:03.023 16:23:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:03.023 16:23:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:03.023 16:23:37 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:03.023 16:23:37 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:03.023 16:23:37 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:03.023 16:23:37 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:03.023 16:23:37 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:03.023 16:23:37 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:12:03.023 16:23:37 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:03.023 16:23:37 -- target/multipath.sh@43 -- # nvmftestinit 00:12:03.023 16:23:37 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:03.023 16:23:37 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:03.023 16:23:37 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:03.023 16:23:37 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:03.023 16:23:37 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:03.023 16:23:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:03.023 16:23:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:03.023 16:23:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:03.023 16:23:37 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:12:03.023 16:23:37 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:12:03.023 16:23:37 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:12:03.023 16:23:37 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:12:03.023 16:23:37 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:12:03.023 16:23:37 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:12:03.023 16:23:37 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:03.023 16:23:37 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:03.023 16:23:37 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:03.023 16:23:37 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:03.023 16:23:37 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:03.023 16:23:37 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:03.023 16:23:37 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:03.023 16:23:37 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:03.023 16:23:37 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:03.023 16:23:37 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:03.023 16:23:37 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:03.023 16:23:37 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:03.023 16:23:37 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:03.023 16:23:37 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:03.023 Cannot find device "nvmf_tgt_br" 00:12:03.023 16:23:37 -- nvmf/common.sh@155 -- # true 00:12:03.023 16:23:37 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:03.023 Cannot find device "nvmf_tgt_br2" 00:12:03.282 16:23:37 -- nvmf/common.sh@156 -- # true 00:12:03.282 16:23:37 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:03.282 16:23:37 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:03.282 Cannot find device "nvmf_tgt_br" 00:12:03.282 16:23:37 -- nvmf/common.sh@158 -- # true 00:12:03.282 16:23:37 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:03.282 Cannot find device "nvmf_tgt_br2" 00:12:03.282 16:23:37 -- nvmf/common.sh@159 -- # true 00:12:03.282 16:23:37 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:03.282 16:23:37 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:03.282 16:23:37 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:03.282 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:03.282 16:23:37 -- nvmf/common.sh@162 -- # true 00:12:03.282 16:23:37 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:03.282 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:03.282 16:23:37 -- nvmf/common.sh@163 -- # true 00:12:03.282 16:23:37 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:03.282 16:23:37 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:03.282 16:23:37 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:03.282 16:23:37 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:03.282 16:23:37 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:03.282 16:23:37 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:03.282 16:23:37 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:03.282 16:23:37 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:03.282 16:23:37 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:03.282 16:23:37 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:03.282 16:23:37 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:03.282 16:23:37 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:03.282 16:23:37 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:03.282 16:23:37 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:12:03.282 16:23:37 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:03.282 16:23:37 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:03.282 16:23:37 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:03.282 16:23:37 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:03.282 16:23:37 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:03.282 16:23:37 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:03.282 16:23:37 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:03.282 16:23:37 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:03.282 16:23:37 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:03.282 16:23:37 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:03.282 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:03.282 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:12:03.282 00:12:03.282 --- 10.0.0.2 ping statistics --- 00:12:03.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:03.282 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:12:03.282 16:23:37 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:03.282 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:03.282 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:12:03.282 00:12:03.282 --- 10.0.0.3 ping statistics --- 00:12:03.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:03.282 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:12:03.282 16:23:37 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:03.282 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:03.282 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:12:03.282 00:12:03.282 --- 10.0.0.1 ping statistics --- 00:12:03.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:03.283 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:12:03.283 16:23:37 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:03.283 16:23:37 -- nvmf/common.sh@422 -- # return 0 00:12:03.283 16:23:37 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:03.283 16:23:37 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:03.283 16:23:37 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:03.283 16:23:37 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:03.283 16:23:37 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:03.283 16:23:37 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:03.283 16:23:37 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:03.542 16:23:37 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:12:03.542 16:23:37 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:12:03.542 16:23:37 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:12:03.542 16:23:37 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:03.542 16:23:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:03.542 16:23:37 -- common/autotest_common.sh@10 -- # set +x 00:12:03.542 16:23:37 -- nvmf/common.sh@470 -- # nvmfpid=74095 00:12:03.542 16:23:37 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:03.542 16:23:37 -- nvmf/common.sh@471 -- # waitforlisten 74095 00:12:03.542 16:23:37 -- common/autotest_common.sh@817 -- # '[' -z 74095 ']' 00:12:03.542 16:23:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:03.542 16:23:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:03.542 16:23:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:03.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:03.542 16:23:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:03.542 16:23:37 -- common/autotest_common.sh@10 -- # set +x 00:12:03.542 [2024-04-17 16:23:37.392036] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:12:03.542 [2024-04-17 16:23:37.392129] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:03.542 [2024-04-17 16:23:37.530696] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:03.801 [2024-04-17 16:23:37.670457] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:03.801 [2024-04-17 16:23:37.670540] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:03.801 [2024-04-17 16:23:37.670555] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:03.801 [2024-04-17 16:23:37.670566] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:03.801 [2024-04-17 16:23:37.670576] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
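Before the trace continues, here is a condensed sketch of the multipath flow this test drives (reconstructed from the rpc.py and nvme-cli invocations traced below; hostnqn/hostid are the values nvme gen-hostnqn produced earlier in this log, and $rpc is shorthand introduced here):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r   # -r enables ANA reporting
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # path 1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420   # path 2
# One nvme connect per path; the kernel merges both into a single ANA-aware
# subsystem with per-path block devices nvme0c0n1 and nvme0c1n1:
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d \
    --hostid=35bbb10f-fc38-42ac-b909-033700c5e05d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d \
    --hostid=35bbb10f-fc38-42ac-b909-033700c5e05d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
# While fio runs, ANA state is flipped per listener and the test polls sysfs
# until the initiator observes the change:
$rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
$rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
cat /sys/block/nvme0c0n1/ana_state /sys/block/nvme0c1n1/ana_state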
00:12:03.801 [2024-04-17 16:23:37.670739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:03.801 [2024-04-17 16:23:37.671029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:03.801 [2024-04-17 16:23:37.671689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:03.801 [2024-04-17 16:23:37.671703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.369 16:23:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:04.369 16:23:38 -- common/autotest_common.sh@850 -- # return 0 00:12:04.369 16:23:38 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:04.369 16:23:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:04.369 16:23:38 -- common/autotest_common.sh@10 -- # set +x 00:12:04.369 16:23:38 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:04.369 16:23:38 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:04.627 [2024-04-17 16:23:38.668008] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:04.885 16:23:38 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:12:05.143 Malloc0 00:12:05.143 16:23:39 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:12:05.401 16:23:39 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:05.660 16:23:39 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:05.919 [2024-04-17 16:23:39.858453] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:05.919 16:23:39 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:06.177 [2024-04-17 16:23:40.122736] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:06.177 16:23:40 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d --hostid=35bbb10f-fc38-42ac-b909-033700c5e05d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:12:06.435 16:23:40 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d --hostid=35bbb10f-fc38-42ac-b909-033700c5e05d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:12:06.694 16:23:40 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:12:06.694 16:23:40 -- common/autotest_common.sh@1184 -- # local i=0 00:12:06.694 16:23:40 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:06.694 16:23:40 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:12:06.694 16:23:40 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:08.596 16:23:42 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:08.596 16:23:42 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:08.596 16:23:42 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:08.596 16:23:42 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:12:08.596 16:23:42 -- 
common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:08.596 16:23:42 -- common/autotest_common.sh@1194 -- # return 0 00:12:08.596 16:23:42 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:12:08.596 16:23:42 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:12:08.596 16:23:42 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:12:08.596 16:23:42 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:08.596 16:23:42 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:12:08.596 16:23:42 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:12:08.596 16:23:42 -- target/multipath.sh@38 -- # return 0 00:12:08.596 16:23:42 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:12:08.596 16:23:42 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:12:08.596 16:23:42 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:12:08.596 16:23:42 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:12:08.596 16:23:42 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:12:08.596 16:23:42 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:12:08.596 16:23:42 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:12:08.596 16:23:42 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:12:08.596 16:23:42 -- target/multipath.sh@22 -- # local timeout=20 00:12:08.596 16:23:42 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:12:08.596 16:23:42 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:12:08.596 16:23:42 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:12:08.596 16:23:42 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:12:08.596 16:23:42 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:12:08.596 16:23:42 -- target/multipath.sh@22 -- # local timeout=20 00:12:08.596 16:23:42 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:12:08.596 16:23:42 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:12:08.596 16:23:42 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:12:08.596 16:23:42 -- target/multipath.sh@85 -- # echo numa 00:12:08.596 16:23:42 -- target/multipath.sh@88 -- # fio_pid=74243 00:12:08.596 16:23:42 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:12:08.596 16:23:42 -- target/multipath.sh@90 -- # sleep 1 00:12:08.596 [global] 00:12:08.596 thread=1 00:12:08.596 invalidate=1 00:12:08.596 rw=randrw 00:12:08.596 time_based=1 00:12:08.596 runtime=6 00:12:08.596 ioengine=libaio 00:12:08.596 direct=1 00:12:08.596 bs=4096 00:12:08.596 iodepth=128 00:12:08.596 norandommap=0 00:12:08.596 numjobs=1 00:12:08.596 00:12:08.596 verify_dump=1 00:12:08.596 verify_backlog=512 00:12:08.596 verify_state_save=0 00:12:08.596 do_verify=1 00:12:08.596 verify=crc32c-intel 00:12:08.596 [job0] 00:12:08.596 filename=/dev/nvme0n1 00:12:08.596 Could not set queue depth (nvme0n1) 00:12:08.855 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:08.855 fio-3.35 00:12:08.855 Starting 1 thread 00:12:09.790 16:23:43 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:12:10.047 16:23:43 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:12:10.306 16:23:44 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:12:10.306 16:23:44 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:12:10.306 16:23:44 -- target/multipath.sh@22 -- # local timeout=20 00:12:10.306 16:23:44 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:12:10.306 16:23:44 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:12:10.306 16:23:44 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:12:10.306 16:23:44 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:12:10.306 16:23:44 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:12:10.306 16:23:44 -- target/multipath.sh@22 -- # local timeout=20 00:12:10.306 16:23:44 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:12:10.306 16:23:44 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:12:10.306 16:23:44 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:12:10.306 16:23:44 -- target/multipath.sh@25 -- # sleep 1s 00:12:11.241 16:23:45 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:12:11.241 16:23:45 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:12:11.241 16:23:45 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:12:11.241 16:23:45 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:12:11.499 16:23:45 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:12:11.757 16:23:45 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:12:11.757 16:23:45 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:12:11.757 16:23:45 -- target/multipath.sh@22 -- # local timeout=20 00:12:11.757 16:23:45 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:12:11.757 16:23:45 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:12:11.757 16:23:45 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:12:11.757 16:23:45 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:12:11.757 16:23:45 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:12:11.757 16:23:45 -- target/multipath.sh@22 -- # local timeout=20 00:12:11.757 16:23:45 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:12:11.757 16:23:45 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:12:11.757 16:23:45 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:12:11.757 16:23:45 -- target/multipath.sh@25 -- # sleep 1s 00:12:12.691 16:23:46 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:12:12.691 16:23:46 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:12:12.691 16:23:46 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:12:12.691 16:23:46 -- target/multipath.sh@104 -- # wait 74243 00:12:15.220 00:12:15.220 job0: (groupid=0, jobs=1): err= 0: pid=74265: Wed Apr 17 16:23:48 2024 00:12:15.220 read: IOPS=10.0k, BW=39.2MiB/s (41.1MB/s)(236MiB/6004msec) 00:12:15.220 slat (usec): min=4, max=5716, avg=57.95, stdev=265.22 00:12:15.220 clat (usec): min=1403, max=31493, avg=8717.28, stdev=1775.86 00:12:15.220 lat (usec): min=1459, max=31527, avg=8775.24, stdev=1791.73 00:12:15.220 clat percentiles (usec): 00:12:15.220 | 1.00th=[ 5080], 5.00th=[ 6587], 10.00th=[ 7308], 20.00th=[ 7701], 00:12:15.220 | 30.00th=[ 7963], 40.00th=[ 8160], 50.00th=[ 8455], 60.00th=[ 8848], 00:12:15.220 | 70.00th=[ 9110], 80.00th=[ 9634], 90.00th=[10683], 95.00th=[11469], 00:12:15.220 | 99.00th=[12911], 99.50th=[14091], 99.90th=[27132], 99.95th=[28967], 00:12:15.220 | 99.99th=[30278] 00:12:15.220 bw ( KiB/s): min= 1712, max=27312, per=51.67%, avg=20756.36, stdev=8361.24, samples=11 00:12:15.220 iops : min= 428, max= 6828, avg=5189.09, stdev=2090.31, samples=11 00:12:15.220 write: IOPS=6111, BW=23.9MiB/s (25.0MB/s)(124MiB/5194msec); 0 zone resets 00:12:15.220 slat (usec): min=11, max=5058, avg=68.37, stdev=181.43 00:12:15.220 clat (usec): min=781, max=26337, avg=7424.90, stdev=1670.14 00:12:15.220 lat (usec): min=863, max=26892, avg=7493.28, stdev=1682.14 00:12:15.220 clat percentiles (usec): 00:12:15.220 | 1.00th=[ 4047], 5.00th=[ 5407], 10.00th=[ 6259], 20.00th=[ 6718], 00:12:15.220 | 30.00th=[ 6980], 40.00th=[ 7177], 50.00th=[ 7373], 60.00th=[ 7570], 00:12:15.220 | 70.00th=[ 7767], 80.00th=[ 8029], 90.00th=[ 8356], 95.00th=[ 8717], 00:12:15.220 | 99.00th=[12256], 99.50th=[21365], 99.90th=[24511], 99.95th=[25297], 00:12:15.220 | 99.99th=[26084] 00:12:15.220 bw ( KiB/s): min= 1872, max=28312, per=85.29%, avg=20851.64, stdev=8374.37, samples=11 00:12:15.220 iops : min= 468, max= 7078, avg=5212.91, stdev=2093.59, samples=11 00:12:15.220 lat (usec) : 1000=0.01% 00:12:15.220 lat (msec) : 2=0.02%, 4=0.40%, 10=89.43%, 20=9.71%, 50=0.43% 00:12:15.220 cpu : usr=5.33%, sys=21.84%, ctx=5885, majf=0, minf=108 00:12:15.220 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:12:15.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:15.220 issued rwts: total=60293,31744,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:15.220 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:15.220 00:12:15.220 Run status group 0 (all jobs): 00:12:15.220 READ: bw=39.2MiB/s (41.1MB/s), 39.2MiB/s-39.2MiB/s (41.1MB/s-41.1MB/s), io=236MiB (247MB), run=6004-6004msec 00:12:15.220 WRITE: bw=23.9MiB/s (25.0MB/s), 23.9MiB/s-23.9MiB/s (25.0MB/s-25.0MB/s), io=124MiB (130MB), run=5194-5194msec 00:12:15.220 00:12:15.220 Disk stats (read/write): 00:12:15.220 nvme0n1: ios=59482/31167, merge=0/0, ticks=488160/216123, in_queue=704283, util=98.65% 00:12:15.220 16:23:48 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:12:15.220 16:23:49 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:12:15.479 16:23:49 -- target/multipath.sh@109 -- # 
check_ana_state nvme0c0n1 optimized 00:12:15.479 16:23:49 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:12:15.479 16:23:49 -- target/multipath.sh@22 -- # local timeout=20 00:12:15.479 16:23:49 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:12:15.479 16:23:49 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:12:15.479 16:23:49 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:12:15.479 16:23:49 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:12:15.479 16:23:49 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:12:15.479 16:23:49 -- target/multipath.sh@22 -- # local timeout=20 00:12:15.479 16:23:49 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:12:15.479 16:23:49 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:12:15.479 16:23:49 -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:12:15.479 16:23:49 -- target/multipath.sh@25 -- # sleep 1s 00:12:16.870 16:23:50 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:12:16.870 16:23:50 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:12:16.870 16:23:50 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:12:16.870 16:23:50 -- target/multipath.sh@113 -- # echo round-robin 00:12:16.870 16:23:50 -- target/multipath.sh@116 -- # fio_pid=74391 00:12:16.870 16:23:50 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:12:16.870 16:23:50 -- target/multipath.sh@118 -- # sleep 1 00:12:16.870 [global] 00:12:16.870 thread=1 00:12:16.870 invalidate=1 00:12:16.870 rw=randrw 00:12:16.870 time_based=1 00:12:16.870 runtime=6 00:12:16.870 ioengine=libaio 00:12:16.870 direct=1 00:12:16.870 bs=4096 00:12:16.870 iodepth=128 00:12:16.870 norandommap=0 00:12:16.870 numjobs=1 00:12:16.870 00:12:16.870 verify_dump=1 00:12:16.870 verify_backlog=512 00:12:16.870 verify_state_save=0 00:12:16.870 do_verify=1 00:12:16.870 verify=crc32c-intel 00:12:16.870 [job0] 00:12:16.870 filename=/dev/nvme0n1 00:12:16.870 Could not set queue depth (nvme0n1) 00:12:16.870 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:16.870 fio-3.35 00:12:16.870 Starting 1 thread 00:12:17.805 16:23:51 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:12:17.805 16:23:51 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:12:18.063 16:23:52 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:12:18.063 16:23:52 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:12:18.063 16:23:52 -- target/multipath.sh@22 -- # local timeout=20 00:12:18.063 16:23:52 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:12:18.063 16:23:52 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:12:18.063 16:23:52 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:12:18.063 16:23:52 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:12:18.063 16:23:52 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:12:18.063 16:23:52 -- target/multipath.sh@22 -- # local timeout=20 00:12:18.063 16:23:52 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:12:18.063 16:23:52 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:12:18.063 16:23:52 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:12:18.063 16:23:52 -- target/multipath.sh@25 -- # sleep 1s 00:12:19.438 16:23:53 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:12:19.438 16:23:53 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:12:19.438 16:23:53 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:12:19.438 16:23:53 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:12:19.438 16:23:53 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:12:19.697 16:23:53 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:12:19.697 16:23:53 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:12:19.697 16:23:53 -- target/multipath.sh@22 -- # local timeout=20 00:12:19.697 16:23:53 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:12:19.697 16:23:53 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:12:19.697 16:23:53 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:12:19.697 16:23:53 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:12:19.697 16:23:53 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:12:19.697 16:23:53 -- target/multipath.sh@22 -- # local timeout=20 00:12:19.697 16:23:53 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:12:19.697 16:23:53 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:12:19.697 16:23:53 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:12:19.697 16:23:53 -- target/multipath.sh@25 -- # sleep 1s 00:12:21.091 16:23:54 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:12:21.091 16:23:54 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:12:21.091 16:23:54 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:12:21.091 16:23:54 -- target/multipath.sh@132 -- # wait 74391 00:12:22.992 00:12:22.992 job0: (groupid=0, jobs=1): err= 0: pid=74412: Wed Apr 17 16:23:56 2024 00:12:22.992 read: IOPS=10.7k, BW=41.7MiB/s (43.7MB/s)(250MiB/6006msec) 00:12:22.992 slat (usec): min=4, max=5547, avg=47.44, stdev=229.84 00:12:22.992 clat (usec): min=349, max=20532, avg=8284.55, stdev=2081.84 00:12:22.992 lat (usec): min=376, max=20546, avg=8331.99, stdev=2095.56 00:12:22.992 clat percentiles (usec): 00:12:22.992 | 1.00th=[ 2737], 5.00th=[ 4621], 10.00th=[ 5538], 20.00th=[ 6849], 00:12:22.992 | 30.00th=[ 7701], 40.00th=[ 8029], 50.00th=[ 8455], 60.00th=[ 8848], 00:12:22.992 | 70.00th=[ 9110], 80.00th=[ 9634], 90.00th=[10421], 95.00th=[11600], 00:12:22.992 | 99.00th=[13960], 99.50th=[14746], 99.90th=[17171], 99.95th=[17695], 00:12:22.992 | 99.99th=[20055] 00:12:22.992 bw ( KiB/s): min=12296, max=33189, per=52.21%, avg=22283.36, stdev=6740.36, samples=11 00:12:22.992 iops : min= 3074, max= 8297, avg=5570.82, stdev=1685.05, samples=11 00:12:22.992 write: IOPS=6204, BW=24.2MiB/s (25.4MB/s)(131MiB/5401msec); 0 zone resets 00:12:22.992 slat (usec): min=12, max=2157, avg=59.40, stdev=150.43 00:12:22.992 clat (usec): min=526, max=17262, avg=6904.25, stdev=1782.76 00:12:22.992 lat (usec): min=584, max=17287, avg=6963.65, stdev=1793.34 00:12:22.992 clat percentiles (usec): 00:12:22.992 | 1.00th=[ 2769], 5.00th=[ 3654], 10.00th=[ 4228], 20.00th=[ 5211], 00:12:22.992 | 30.00th=[ 6456], 40.00th=[ 6915], 50.00th=[ 7242], 60.00th=[ 7504], 00:12:22.992 | 70.00th=[ 7832], 80.00th=[ 8094], 90.00th=[ 8586], 95.00th=[ 9241], 00:12:22.992 | 99.00th=[11600], 99.50th=[12256], 99.90th=[13960], 99.95th=[14615], 00:12:22.992 | 99.99th=[17171] 00:12:22.992 bw ( KiB/s): min=12568, max=32702, per=89.82%, avg=22293.45, stdev=6497.79, samples=11 00:12:22.992 iops : min= 3142, max= 8175, avg=5573.27, stdev=1624.32, samples=11 00:12:22.992 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.03% 00:12:22.992 lat (msec) : 2=0.31%, 4=4.44%, 10=85.03%, 20=10.16%, 50=0.01% 00:12:22.992 cpu : usr=6.24%, sys=23.26%, ctx=6696, majf=0, minf=84 00:12:22.992 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:12:22.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:22.992 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:22.992 issued rwts: total=64083,33511,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:22.992 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:22.992 00:12:22.992 Run status group 0 (all jobs): 00:12:22.992 READ: bw=41.7MiB/s (43.7MB/s), 41.7MiB/s-41.7MiB/s (43.7MB/s-43.7MB/s), io=250MiB (262MB), run=6006-6006msec 00:12:22.992 WRITE: bw=24.2MiB/s (25.4MB/s), 24.2MiB/s-24.2MiB/s (25.4MB/s-25.4MB/s), io=131MiB (137MB), run=5401-5401msec 00:12:22.992 00:12:22.992 Disk stats (read/write): 00:12:22.992 nvme0n1: ios=63350/32718, merge=0/0, ticks=491895/209101, in_queue=700996, util=98.70% 00:12:22.992 16:23:56 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:22.992 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:12:22.992 16:23:56 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:22.992 16:23:56 -- common/autotest_common.sh@1205 -- # local i=0 00:12:22.992 16:23:56 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:12:22.992 16:23:56 
-- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:22.992 16:23:56 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:12:22.992 16:23:56 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:22.992 16:23:56 -- common/autotest_common.sh@1217 -- # return 0 00:12:22.992 16:23:56 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:23.250 16:23:57 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:12:23.250 16:23:57 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:12:23.250 16:23:57 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:12:23.250 16:23:57 -- target/multipath.sh@144 -- # nvmftestfini 00:12:23.250 16:23:57 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:23.250 16:23:57 -- nvmf/common.sh@117 -- # sync 00:12:23.250 16:23:57 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:23.250 16:23:57 -- nvmf/common.sh@120 -- # set +e 00:12:23.250 16:23:57 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:23.250 16:23:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:23.250 rmmod nvme_tcp 00:12:23.250 rmmod nvme_fabrics 00:12:23.250 rmmod nvme_keyring 00:12:23.250 16:23:57 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:23.250 16:23:57 -- nvmf/common.sh@124 -- # set -e 00:12:23.250 16:23:57 -- nvmf/common.sh@125 -- # return 0 00:12:23.250 16:23:57 -- nvmf/common.sh@478 -- # '[' -n 74095 ']' 00:12:23.250 16:23:57 -- nvmf/common.sh@479 -- # killprocess 74095 00:12:23.250 16:23:57 -- common/autotest_common.sh@936 -- # '[' -z 74095 ']' 00:12:23.250 16:23:57 -- common/autotest_common.sh@940 -- # kill -0 74095 00:12:23.250 16:23:57 -- common/autotest_common.sh@941 -- # uname 00:12:23.250 16:23:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:23.250 16:23:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74095 00:12:23.250 killing process with pid 74095 00:12:23.250 16:23:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:23.250 16:23:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:23.250 16:23:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74095' 00:12:23.250 16:23:57 -- common/autotest_common.sh@955 -- # kill 74095 00:12:23.250 16:23:57 -- common/autotest_common.sh@960 -- # wait 74095 00:12:23.817 16:23:57 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:23.817 16:23:57 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:23.817 16:23:57 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:23.817 16:23:57 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:23.817 16:23:57 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:23.817 16:23:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:23.817 16:23:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:23.817 16:23:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.817 16:23:57 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:23.817 ************************************ 00:12:23.817 END TEST nvmf_multipath 00:12:23.817 ************************************ 00:12:23.817 00:12:23.817 real 0m20.697s 00:12:23.817 user 1m21.661s 00:12:23.817 sys 0m6.169s 00:12:23.817 16:23:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:23.817 16:23:57 -- common/autotest_common.sh@10 -- # set +x 00:12:23.817 16:23:57 -- nvmf/nvmf.sh@53 -- # run_test 
nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:23.817 16:23:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:23.817 16:23:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:23.817 16:23:57 -- common/autotest_common.sh@10 -- # set +x 00:12:23.817 ************************************ 00:12:23.817 START TEST nvmf_zcopy 00:12:23.817 ************************************ 00:12:23.817 16:23:57 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:23.817 * Looking for test storage... 00:12:23.817 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:23.817 16:23:57 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:23.817 16:23:57 -- nvmf/common.sh@7 -- # uname -s 00:12:23.817 16:23:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:23.817 16:23:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:23.817 16:23:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:23.817 16:23:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:23.817 16:23:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:23.817 16:23:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:23.817 16:23:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:23.817 16:23:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:23.817 16:23:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:23.817 16:23:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:23.817 16:23:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d 00:12:23.817 16:23:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=35bbb10f-fc38-42ac-b909-033700c5e05d 00:12:23.817 16:23:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:23.817 16:23:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:23.817 16:23:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:23.817 16:23:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:23.817 16:23:57 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:23.817 16:23:57 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:23.817 16:23:57 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:23.817 16:23:57 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:23.817 16:23:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.817 16:23:57 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.817 16:23:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.817 16:23:57 -- paths/export.sh@5 -- # export PATH 00:12:23.817 16:23:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.817 16:23:57 -- nvmf/common.sh@47 -- # : 0 00:12:23.817 16:23:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:23.817 16:23:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:23.817 16:23:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:23.817 16:23:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:23.817 16:23:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:23.817 16:23:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:23.817 16:23:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:23.817 16:23:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:23.817 16:23:57 -- target/zcopy.sh@12 -- # nvmftestinit 00:12:23.817 16:23:57 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:23.817 16:23:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:23.817 16:23:57 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:23.817 16:23:57 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:23.817 16:23:57 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:23.817 16:23:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:23.817 16:23:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:23.817 16:23:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.817 16:23:57 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:12:23.817 16:23:57 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:12:23.817 16:23:57 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:12:23.817 16:23:57 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:12:23.817 16:23:57 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:12:23.817 16:23:57 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:12:23.817 16:23:57 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:23.817 16:23:57 -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:23.817 16:23:57 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:23.817 16:23:57 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:23.817 16:23:57 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:23.817 16:23:57 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:23.817 16:23:57 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:23.817 16:23:57 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:23.818 16:23:57 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:23.818 16:23:57 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:23.818 16:23:57 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:23.818 16:23:57 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:23.818 16:23:57 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:23.818 16:23:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:24.077 Cannot find device "nvmf_tgt_br" 00:12:24.077 16:23:57 -- nvmf/common.sh@155 -- # true 00:12:24.077 16:23:57 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:24.077 Cannot find device "nvmf_tgt_br2" 00:12:24.077 16:23:57 -- nvmf/common.sh@156 -- # true 00:12:24.077 16:23:57 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:24.077 16:23:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:24.077 Cannot find device "nvmf_tgt_br" 00:12:24.077 16:23:57 -- nvmf/common.sh@158 -- # true 00:12:24.077 16:23:57 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:24.077 Cannot find device "nvmf_tgt_br2" 00:12:24.077 16:23:57 -- nvmf/common.sh@159 -- # true 00:12:24.077 16:23:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:24.077 16:23:57 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:24.077 16:23:57 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:24.077 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:24.077 16:23:57 -- nvmf/common.sh@162 -- # true 00:12:24.077 16:23:57 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:24.077 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:24.077 16:23:57 -- nvmf/common.sh@163 -- # true 00:12:24.077 16:23:57 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:24.077 16:23:57 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:24.077 16:23:57 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:24.077 16:23:57 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:24.077 16:23:57 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:24.077 16:23:58 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:24.077 16:23:58 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:24.077 16:23:58 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:24.077 16:23:58 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:24.077 16:23:58 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:24.077 16:23:58 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:24.077 16:23:58 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up
00:12:24.077 16:23:58 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up
00:12:24.077 16:23:58 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:12:24.077 16:23:58 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:12:24.077 16:23:58 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:12:24.077 16:23:58 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge
00:12:24.077 16:23:58 -- nvmf/common.sh@193 -- # ip link set nvmf_br up
00:12:24.077 16:23:58 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br
00:12:24.077 16:23:58 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br
00:12:24.335 16:23:58 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:12:24.335 16:23:58 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:12:24.335 16:23:58 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:12:24.335 16:23:58 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2
00:12:24.335 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:24.335 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms
00:12:24.335
00:12:24.335 --- 10.0.0.2 ping statistics ---
00:12:24.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:24.335 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms
00:12:24.335 16:23:58 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3
00:12:24.335 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:12:24.335 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms
00:12:24.335
00:12:24.335 --- 10.0.0.3 ping statistics ---
00:12:24.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:24.335 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms
00:12:24.335 16:23:58 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:12:24.335 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:24.335 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms
00:12:24.335
00:12:24.335 --- 10.0.0.1 ping statistics ---
00:12:24.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:24.335 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms
00:12:24.335 16:23:58 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:24.335 16:23:58 -- nvmf/common.sh@422 -- # return 0
00:12:24.335 16:23:58 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:12:24.335 16:23:58 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:24.335 16:23:58 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:12:24.335 16:23:58 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:12:24.335 16:23:58 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:24.335 16:23:58 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:12:24.335 16:23:58 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:12:24.335 16:23:58 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:12:24.335 16:23:58 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:12:24.335 16:23:58 -- common/autotest_common.sh@710 -- # xtrace_disable
00:12:24.335 16:23:58 -- common/autotest_common.sh@10 -- # set +x
00:12:24.335 16:23:58 -- nvmf/common.sh@470 -- # nvmfpid=74703
00:12:24.335 16:23:58 -- nvmf/common.sh@471 -- # waitforlisten 74703
00:12:24.335 16:23:58 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:12:24.335 16:23:58 -- common/autotest_common.sh@817 -- # '[' -z 74703 ']'
00:12:24.336 16:23:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:24.336 16:23:58 -- common/autotest_common.sh@822 -- # local max_retries=100
00:12:24.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:24.336 16:23:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:24.336 16:23:58 -- common/autotest_common.sh@826 -- # xtrace_disable
00:12:24.336 16:23:58 -- common/autotest_common.sh@10 -- # set +x
00:12:24.336 [2024-04-17 16:23:58.255531] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization...
00:12:24.336 [2024-04-17 16:23:58.255667] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:24.594 [2024-04-17 16:23:58.399729] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:24.594 [2024-04-17 16:23:58.527682] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:12:24.594 [2024-04-17 16:23:58.527748] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:12:24.594 [2024-04-17 16:23:58.527763] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:12:24.594 [2024-04-17 16:23:58.527788] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running.
00:12:24.594 [2024-04-17 16:23:58.527798] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:12:24.594 [2024-04-17 16:23:58.527837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:12:25.528 16:23:59 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:12:25.528 16:23:59 -- common/autotest_common.sh@850 -- # return 0
00:12:25.528 16:23:59 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:12:25.528 16:23:59 -- common/autotest_common.sh@716 -- # xtrace_disable
00:12:25.528 16:23:59 -- common/autotest_common.sh@10 -- # set +x
00:12:25.528 16:23:59 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:12:25.528 16:23:59 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']'
00:12:25.528 16:23:59 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
00:12:25.528 16:23:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:12:25.528 16:23:59 -- common/autotest_common.sh@10 -- # set +x
00:12:25.528 [2024-04-17 16:23:59.373807] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:12:25.528 16:23:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:12:25.528 16:23:59 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:12:25.528 16:23:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:12:25.528 16:23:59 -- common/autotest_common.sh@10 -- # set +x
00:12:25.528 16:23:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:12:25.528 16:23:59 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:25.528 16:23:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:12:25.528 16:23:59 -- common/autotest_common.sh@10 -- # set +x
00:12:25.528 [2024-04-17 16:23:59.389935] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:25.528 16:23:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:12:25.528 16:23:59 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:12:25.528 16:23:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:12:25.528 16:23:59 -- common/autotest_common.sh@10 -- # set +x
00:12:25.528 16:23:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:12:25.528 16:23:59 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:12:25.528 16:23:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:12:25.528 16:23:59 -- common/autotest_common.sh@10 -- # set +x
00:12:25.528 malloc0
00:12:25.528 16:23:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:12:25.528 16:23:59 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:12:25.528 16:23:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:12:25.528 16:23:59 -- common/autotest_common.sh@10 -- # set +x
00:12:25.528 16:23:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:12:25.528 16:23:59 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:12:25.528 16:23:59 -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:12:25.528 16:23:59 -- nvmf/common.sh@521 -- # config=()
00:12:25.528 16:23:59 -- nvmf/common.sh@521 -- # local subsystem config
00:12:25.528 16:23:59 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}"
00:12:25.528 16:23:59 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF
00:12:25.528 {
00:12:25.528 "params": {
00:12:25.528 "name": "Nvme$subsystem",
00:12:25.528 "trtype": "$TEST_TRANSPORT",
00:12:25.528 "traddr": "$NVMF_FIRST_TARGET_IP",
00:12:25.528 "adrfam": "ipv4",
00:12:25.528 "trsvcid": "$NVMF_PORT",
00:12:25.528 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:12:25.528 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:12:25.528 "hdgst": ${hdgst:-false},
00:12:25.528 "ddgst": ${ddgst:-false}
00:12:25.528 },
00:12:25.528 "method": "bdev_nvme_attach_controller"
00:12:25.528 }
00:12:25.528 EOF
00:12:25.528 )")
00:12:25.528 16:23:59 -- nvmf/common.sh@543 -- # cat
00:12:25.528 16:23:59 -- nvmf/common.sh@545 -- # jq .
00:12:25.528 16:23:59 -- nvmf/common.sh@546 -- # IFS=,
00:12:25.528 16:23:59 -- nvmf/common.sh@547 -- # printf '%s\n' '{
00:12:25.528 "params": {
00:12:25.528 "name": "Nvme1",
00:12:25.528 "trtype": "tcp",
00:12:25.528 "traddr": "10.0.0.2",
00:12:25.528 "adrfam": "ipv4",
00:12:25.528 "trsvcid": "4420",
00:12:25.528 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:12:25.528 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:12:25.528 "hdgst": false,
00:12:25.528 "ddgst": false
00:12:25.528 },
00:12:25.528 "method": "bdev_nvme_attach_controller"
00:12:25.528 }'
[2024-04-17 16:23:59.494244] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization...
[2024-04-17 16:23:59.495186] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74754 ]
[2024-04-17 16:23:59.640857] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-04-17 16:23:59.767574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
[2024-04-17 16:23:59.776823] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null)
[2024-04-17 16:23:59.948279] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null)
Running I/O for 10 seconds...
00:12:36.047
00:12:36.047 Latency(us)
00:12:36.047 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:36.047 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:12:36.047 Verification LBA range: start 0x0 length 0x1000
00:12:36.047 Nvme1n1 : 10.02 5773.63 45.11 0.00 0.00 22098.95 3395.96 33840.41
00:12:36.048 ===================================================================================================================
00:12:36.048 Total : 5773.63 45.11 0.00 0.00 22098.95 3395.96 33840.41
00:12:36.307 16:24:10 -- target/zcopy.sh@39 -- # perfpid=74876
00:12:36.307 16:24:10 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:12:36.307 16:24:10 -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:12:36.307 16:24:10 -- nvmf/common.sh@521 -- # config=()
00:12:36.307 16:24:10 -- nvmf/common.sh@521 -- # local subsystem config
00:12:36.307 16:24:10 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}"
00:12:36.307 16:24:10 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF
00:12:36.307 {
00:12:36.307 "params": {
00:12:36.307 "name": "Nvme$subsystem",
00:12:36.307 "trtype": "$TEST_TRANSPORT",
00:12:36.307 "traddr": "$NVMF_FIRST_TARGET_IP",
00:12:36.307 "adrfam": "ipv4",
00:12:36.307 "trsvcid": "$NVMF_PORT",
00:12:36.307 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:12:36.307 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:12:36.307 "hdgst": ${hdgst:-false},
00:12:36.307 "ddgst": ${ddgst:-false}
00:12:36.307 },
00:12:36.307 "method": "bdev_nvme_attach_controller"
00:12:36.307 }
00:12:36.307 EOF
00:12:36.307 )")
00:12:36.307 16:24:10 -- target/zcopy.sh@41 -- # xtrace_disable
00:12:36.307 16:24:10 -- common/autotest_common.sh@10 -- # set +x
00:12:36.307 16:24:10 -- nvmf/common.sh@543 -- # cat
00:12:36.307 16:24:10 -- nvmf/common.sh@545 -- # jq .
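The remainder of the trace is one pattern repeated while bdevperf drives I/O: the fixture keeps re-issuing nvmf_subsystem_add_ns for NSID 1, the target rejects each attempt ("Requested NSID 1 already in use" / "Unable to add namespace"), and the Go JSON-RPC client logs the resulting -32602 reply. A reproduction sketch, using the same NQN, bdev, and NSID as in the trace (the first call is the one that succeeded during setup):

scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # first add succeeds; NSID 1 now exists
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # repeat is rejected: Code=-32602 Msg=Invalid parameters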
00:12:36.307 [2024-04-17 16:24:10.252119] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.307 [2024-04-17 16:24:10.252165] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.307 16:24:10 -- nvmf/common.sh@546 -- # IFS=, 00:12:36.307 16:24:10 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:12:36.307 "params": { 00:12:36.307 "name": "Nvme1", 00:12:36.307 "trtype": "tcp", 00:12:36.307 "traddr": "10.0.0.2", 00:12:36.307 "adrfam": "ipv4", 00:12:36.307 "trsvcid": "4420", 00:12:36.307 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:36.307 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:36.307 "hdgst": false, 00:12:36.307 "ddgst": false 00:12:36.307 }, 00:12:36.307 "method": "bdev_nvme_attach_controller" 00:12:36.307 }' 00:12:36.307 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.307 [2024-04-17 16:24:10.264101] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.307 [2024-04-17 16:24:10.264288] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.307 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.307 [2024-04-17 16:24:10.276110] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.307 [2024-04-17 16:24:10.276293] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.307 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.307 [2024-04-17 16:24:10.284089] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.307 [2024-04-17 16:24:10.284247] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.307 [2024-04-17 16:24:10.285740] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 
00:12:36.307 [2024-04-17 16:24:10.285990] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74876 ] 00:12:36.307 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.307 [2024-04-17 16:24:10.292100] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.307 [2024-04-17 16:24:10.292258] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.307 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.307 [2024-04-17 16:24:10.300090] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.307 [2024-04-17 16:24:10.300233] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.307 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.307 [2024-04-17 16:24:10.312086] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.307 [2024-04-17 16:24:10.312229] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.307 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.307 [2024-04-17 16:24:10.320078] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.307 [2024-04-17 16:24:10.320212] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.307 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.307 [2024-04-17 16:24:10.328088] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.307 [2024-04-17 16:24:10.328222] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.307 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.307 [2024-04-17 16:24:10.340097] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.307 [2024-04-17 16:24:10.340246] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.307 
2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.567 [2024-04-17 16:24:10.352118] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.567 [2024-04-17 16:24:10.352271] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.567 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.567 [2024-04-17 16:24:10.364101] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.567 [2024-04-17 16:24:10.364252] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.567 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.567 [2024-04-17 16:24:10.376111] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.567 [2024-04-17 16:24:10.376250] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.567 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.567 [2024-04-17 16:24:10.388111] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.567 [2024-04-17 16:24:10.388248] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.567 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.567 [2024-04-17 16:24:10.400111] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.567 [2024-04-17 16:24:10.400246] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.567 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.567 [2024-04-17 16:24:10.412113] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.568 [2024-04-17 16:24:10.412257] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.568 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.568 
[2024-04-17 16:24:10.420651] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:36.568 [2024-04-17 16:24:10.424116] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.568 [2024-04-17 16:24:10.424147] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.568 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.568 [2024-04-17 16:24:10.436159] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.568 [2024-04-17 16:24:10.436196] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.568 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.568 [2024-04-17 16:24:10.448152] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.568 [2024-04-17 16:24:10.448185] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.568 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.568 [2024-04-17 16:24:10.460145] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.568 [2024-04-17 16:24:10.460177] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.568 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.568 [2024-04-17 16:24:10.472135] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.568 [2024-04-17 16:24:10.472163] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.568 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.568 [2024-04-17 16:24:10.484159] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.568 [2024-04-17 16:24:10.484194] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.568 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.568 [2024-04-17 16:24:10.496162] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.568 [2024-04-17 16:24:10.496323] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:12:36.568 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.568 [2024-04-17 16:24:10.508158] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.568 [2024-04-17 16:24:10.508297] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.568 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.568 [2024-04-17 16:24:10.520161] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.568 [2024-04-17 16:24:10.520301] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.568 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.568 [2024-04-17 16:24:10.532160] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.568 [2024-04-17 16:24:10.532297] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.568 [2024-04-17 16:24:10.534996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.568 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.568 [2024-04-17 16:24:10.543903] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:12:36.568 [2024-04-17 16:24:10.544172] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.568 [2024-04-17 16:24:10.544191] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.568 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.568 [2024-04-17 16:24:10.556202] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.568 [2024-04-17 16:24:10.556243] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.568 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.568 [2024-04-17 16:24:10.568196] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.568 [2024-04-17 16:24:10.568234] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.568 2024/04/17 
16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.568 [2024-04-17 16:24:10.580193] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.568 [2024-04-17 16:24:10.580228] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.568 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.568 [2024-04-17 16:24:10.592198] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.568 [2024-04-17 16:24:10.592363] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.568 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.568 [2024-04-17 16:24:10.604218] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.568 [2024-04-17 16:24:10.604413] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.568 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.827 [2024-04-17 16:24:10.616224] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.828 [2024-04-17 16:24:10.616405] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.828 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.828 [2024-04-17 16:24:10.628206] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.828 [2024-04-17 16:24:10.628351] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.828 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.828 [2024-04-17 16:24:10.640201] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.828 [2024-04-17 16:24:10.640345] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.828 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.828 [2024-04-17 
16:24:10.652255] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.828 [2024-04-17 16:24:10.652432] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.828 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.828 [2024-04-17 16:24:10.664261] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.828 [2024-04-17 16:24:10.664431] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.828 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.828 [2024-04-17 16:24:10.672230] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.828 [2024-04-17 16:24:10.672394] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.828 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.828 [2024-04-17 16:24:10.680238] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.828 [2024-04-17 16:24:10.680385] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.828 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.828 [2024-04-17 16:24:10.692247] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.828 [2024-04-17 16:24:10.692409] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.828 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.828 [2024-04-17 16:24:10.704548] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.828 [2024-04-17 16:24:10.704726] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.828 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.828 [2024-04-17 16:24:10.712755] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:12:36.828 Running I/O for 5 seconds... 
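This second bdevperf pass reuses the same generated attach-controller config and differs from the first only in its workload flags. Side by side, with <(gen_nvmf_target_json) standing in for the logged /dev/fd/6x process substitutions:

# Pass 1: verify workload, 10 s, queue depth 128, 8 KiB I/Os
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192
# Pass 2: mixed random read/write for 5 s; -M 50 sets the read share of the mix to 50%
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192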
00:12:36.828 [2024-04-17 16:24:10.716270] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.828 [2024-04-17 16:24:10.716306] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.828 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.828 [2024-04-17 16:24:10.735189] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.828 [2024-04-17 16:24:10.735248] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.828 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.828 [2024-04-17 16:24:10.750652] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.828 [2024-04-17 16:24:10.750700] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.828 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.828 [2024-04-17 16:24:10.761801] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.828 [2024-04-17 16:24:10.761975] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.828 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.828 [2024-04-17 16:24:10.776685] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.828 [2024-04-17 16:24:10.776868] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.828 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.828 [2024-04-17 16:24:10.794379] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.828 [2024-04-17 16:24:10.794553] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.828 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.828 [2024-04-17 16:24:10.810901] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.828 [2024-04-17 16:24:10.811072] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.828 2024/04/17 16:24:10 error on JSON-RPC call, 
method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.828 [2024-04-17 16:24:10.827253] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.828 [2024-04-17 16:24:10.827425] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.828 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.828 [2024-04-17 16:24:10.843829] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.828 [2024-04-17 16:24:10.843983] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.828 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:36.828 [2024-04-17 16:24:10.860329] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:36.828 [2024-04-17 16:24:10.860488] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:36.828 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.087 [2024-04-17 16:24:10.876784] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.087 [2024-04-17 16:24:10.876958] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.087 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.087 [2024-04-17 16:24:10.894299] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.087 [2024-04-17 16:24:10.894339] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.087 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.087 [2024-04-17 16:24:10.905709] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.087 [2024-04-17 16:24:10.905762] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.087 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.087 [2024-04-17 16:24:10.921608] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.087 [2024-04-17 16:24:10.921659] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.087 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.087 [2024-04-17 16:24:10.933500] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.087 [2024-04-17 16:24:10.933659] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.087 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.087 [2024-04-17 16:24:10.948590] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.087 [2024-04-17 16:24:10.948755] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.087 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.088 [2024-04-17 16:24:10.960212] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.088 [2024-04-17 16:24:10.960367] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.088 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.088 [2024-04-17 16:24:10.977921] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.088 [2024-04-17 16:24:10.978080] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.088 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.088 [2024-04-17 16:24:10.994619] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.088 [2024-04-17 16:24:10.994811] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.088 2024/04/17 16:24:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.088 [2024-04-17 16:24:11.010877] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.088 [2024-04-17 16:24:11.011033] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.088 2024/04/17 16:24:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.088 [2024-04-17 16:24:11.021862] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.088 [2024-04-17 16:24:11.022051] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.088 2024/04/17 16:24:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.088 [2024-04-17 16:24:11.037065] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.088 [2024-04-17 16:24:11.037113] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.088 2024/04/17 16:24:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.088 [2024-04-17 16:24:11.054236] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.088 [2024-04-17 16:24:11.054278] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.088 2024/04/17 16:24:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.088 [2024-04-17 16:24:11.070311] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.088 [2024-04-17 16:24:11.070407] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.088 2024/04/17 16:24:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.088 [2024-04-17 16:24:11.081365] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.088 [2024-04-17 16:24:11.081529] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.088 2024/04/17 16:24:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.088 [2024-04-17 16:24:11.092647] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.088 [2024-04-17 16:24:11.092813] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.088 2024/04/17 16:24:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:37.088 [2024-04-17 16:24:11.103594] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.203 [2024-04-17 16:24:12.962977] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.203 [2024-04-17 16:24:12.963046] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.203 2024/04/17 16:24:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.203 [2024-04-17 16:24:12.979928] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.203 [2024-04-17 16:24:12.979984] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.203 2024/04/17 16:24:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.203 [2024-04-17 16:24:12.996991] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.203 [2024-04-17 16:24:12.997049] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.203 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.203 [2024-04-17 16:24:13.013895] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.203 [2024-04-17 16:24:13.013956] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.203 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.203 [2024-04-17 16:24:13.030411] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.203 [2024-04-17 16:24:13.030460] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.203 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.203 [2024-04-17 16:24:13.041122] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.203 [2024-04-17 16:24:13.041160] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.203 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.203 [2024-04-17 16:24:13.056515] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.203 [2024-04-17 16:24:13.056569] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.203 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.204 [2024-04-17 16:24:13.072682] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.204 [2024-04-17 16:24:13.072741] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.204 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.204 [2024-04-17 16:24:13.083663] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.204 [2024-04-17 16:24:13.083705] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.204 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.204 [2024-04-17 16:24:13.098816] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.204 [2024-04-17 16:24:13.098856] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.204 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.204 [2024-04-17 16:24:13.118114] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.204 [2024-04-17 16:24:13.118179] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.204 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.204 [2024-04-17 16:24:13.132918] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.204 [2024-04-17 16:24:13.132968] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.204 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.204 [2024-04-17 16:24:13.148601] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.204 [2024-04-17 16:24:13.148651] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.204 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.204 [2024-04-17 16:24:13.165260] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.204 [2024-04-17 16:24:13.165304] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.204 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.204 [2024-04-17 16:24:13.181968] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.204 [2024-04-17 16:24:13.182014] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.204 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.204 [2024-04-17 16:24:13.192202] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.204 [2024-04-17 16:24:13.192251] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.204 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.204 [2024-04-17 16:24:13.207626] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.204 [2024-04-17 16:24:13.207676] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.204 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.204 [2024-04-17 16:24:13.223716] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.204 [2024-04-17 16:24:13.223765] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.204 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.474 [2024-04-17 16:24:13.240182] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.474 [2024-04-17 16:24:13.240237] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.474 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.474 [2024-04-17 16:24:13.257213] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:12:39.474 [2024-04-17 16:24:13.257262] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.474 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.474 [2024-04-17 16:24:13.273376] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.474 [2024-04-17 16:24:13.273427] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.474 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.474 [2024-04-17 16:24:13.283929] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.474 [2024-04-17 16:24:13.283974] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.474 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.474 [2024-04-17 16:24:13.294768] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.474 [2024-04-17 16:24:13.294825] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.474 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.474 [2024-04-17 16:24:13.310881] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.474 [2024-04-17 16:24:13.310927] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.474 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.475 [2024-04-17 16:24:13.327041] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.475 [2024-04-17 16:24:13.327093] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.475 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.475 [2024-04-17 16:24:13.343008] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.475 [2024-04-17 16:24:13.343051] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.475 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.475 [2024-04-17 16:24:13.353736] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.475 [2024-04-17 16:24:13.353789] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.475 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.475 [2024-04-17 16:24:13.368600] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.475 [2024-04-17 16:24:13.368650] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.475 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.475 [2024-04-17 16:24:13.378480] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.475 [2024-04-17 16:24:13.378521] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.475 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.475 [2024-04-17 16:24:13.393027] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.475 [2024-04-17 16:24:13.393068] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.475 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.475 [2024-04-17 16:24:13.403983] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.475 [2024-04-17 16:24:13.404026] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.475 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.475 [2024-04-17 16:24:13.415609] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.475 [2024-04-17 16:24:13.415660] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.475 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.475 [2024-04-17 16:24:13.430473] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:12:39.475 [2024-04-17 16:24:13.430523] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.475 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.475 [2024-04-17 16:24:13.446982] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.475 [2024-04-17 16:24:13.447032] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.475 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.475 [2024-04-17 16:24:13.463424] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.475 [2024-04-17 16:24:13.463477] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.475 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.475 [2024-04-17 16:24:13.479589] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.475 [2024-04-17 16:24:13.479649] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.475 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.475 [2024-04-17 16:24:13.497031] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.475 [2024-04-17 16:24:13.497082] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.475 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.475 [2024-04-17 16:24:13.512664] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.475 [2024-04-17 16:24:13.512711] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.475 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.757 [2024-04-17 16:24:13.530018] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.757 [2024-04-17 16:24:13.530072] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.757 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.757 [2024-04-17 16:24:13.546989] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.757 [2024-04-17 16:24:13.547045] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.757 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.757 [2024-04-17 16:24:13.563737] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.757 [2024-04-17 16:24:13.563807] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.757 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.757 [2024-04-17 16:24:13.580373] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.757 [2024-04-17 16:24:13.580426] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.757 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.757 [2024-04-17 16:24:13.596943] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.757 [2024-04-17 16:24:13.596997] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.757 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.757 [2024-04-17 16:24:13.613936] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.757 [2024-04-17 16:24:13.613976] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.757 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.757 [2024-04-17 16:24:13.625320] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.757 [2024-04-17 16:24:13.625370] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.757 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.757 [2024-04-17 16:24:13.637918] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.757 [2024-04-17 16:24:13.637961] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.757 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.757 [2024-04-17 16:24:13.653670] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.757 [2024-04-17 16:24:13.653713] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.757 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.757 [2024-04-17 16:24:13.665579] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.757 [2024-04-17 16:24:13.665620] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.757 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.757 [2024-04-17 16:24:13.678159] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.757 [2024-04-17 16:24:13.678201] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.757 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.757 [2024-04-17 16:24:13.694595] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.757 [2024-04-17 16:24:13.694647] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.757 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.757 [2024-04-17 16:24:13.713383] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.757 [2024-04-17 16:24:13.713446] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.757 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.757 [2024-04-17 16:24:13.731731] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.757 [2024-04-17 16:24:13.731802] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.757 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.757 [2024-04-17 16:24:13.749032] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.757 [2024-04-17 16:24:13.749081] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.757 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.757 [2024-04-17 16:24:13.767460] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.757 [2024-04-17 16:24:13.767521] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:39.757 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:39.757 [2024-04-17 16:24:13.784889] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:39.757 [2024-04-17 16:24:13.784943] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.021 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.021 [2024-04-17 16:24:13.802787] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.021 [2024-04-17 16:24:13.802844] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.021 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.021 [2024-04-17 16:24:13.820328] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.021 [2024-04-17 16:24:13.820386] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.021 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.021 [2024-04-17 16:24:13.832280] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.021 [2024-04-17 16:24:13.832330] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.021 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.021 [2024-04-17 16:24:13.849897] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.021 [2024-04-17 16:24:13.849949] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:12:40.021 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.021 [2024-04-17 16:24:13.864066] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.021 [2024-04-17 16:24:13.864120] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.021 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.021 [2024-04-17 16:24:13.878160] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.021 [2024-04-17 16:24:13.878213] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.021 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.022 [2024-04-17 16:24:13.895976] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.022 [2024-04-17 16:24:13.896026] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.022 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.022 [2024-04-17 16:24:13.912469] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.022 [2024-04-17 16:24:13.912520] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.022 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.022 [2024-04-17 16:24:13.930353] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.022 [2024-04-17 16:24:13.930397] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.022 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.022 [2024-04-17 16:24:13.946169] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.022 [2024-04-17 16:24:13.946215] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.022 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:12:40.022 [2024-04-17 16:24:13.957135] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.022 [2024-04-17 16:24:13.957185] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.022 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.022 [2024-04-17 16:24:13.968376] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.022 [2024-04-17 16:24:13.968415] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.022 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.022 [2024-04-17 16:24:13.981948] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.022 [2024-04-17 16:24:13.981992] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.022 2024/04/17 16:24:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.022 [2024-04-17 16:24:13.998127] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.022 [2024-04-17 16:24:13.998179] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.022 2024/04/17 16:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.022 [2024-04-17 16:24:14.015129] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.022 [2024-04-17 16:24:14.015172] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.022 2024/04/17 16:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.022 [2024-04-17 16:24:14.031375] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.022 [2024-04-17 16:24:14.031417] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.022 2024/04/17 16:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.022 [2024-04-17 16:24:14.047215] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.022 [2024-04-17 16:24:14.047256] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.022 2024/04/17 16:24:14 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.280 [2024-04-17 16:24:14.066504] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.280 [2024-04-17 16:24:14.066545] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.280 2024/04/17 16:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.280 [2024-04-17 16:24:14.081588] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.280 [2024-04-17 16:24:14.081633] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.280 2024/04/17 16:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.280 [2024-04-17 16:24:14.092140] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.280 [2024-04-17 16:24:14.092179] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.280 2024/04/17 16:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.280 [2024-04-17 16:24:14.103305] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.280 [2024-04-17 16:24:14.103344] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.280 2024/04/17 16:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.280 [2024-04-17 16:24:14.120263] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.280 [2024-04-17 16:24:14.120300] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.280 2024/04/17 16:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.280 [2024-04-17 16:24:14.136561] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.280 [2024-04-17 16:24:14.136602] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.280 2024/04/17 16:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.280 [2024-04-17 16:24:14.147348] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.280 [2024-04-17 16:24:14.147387] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.280 2024/04/17 16:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.280 [2024-04-17 16:24:14.162804] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.280 [2024-04-17 16:24:14.162847] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.280 2024/04/17 16:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.280 [2024-04-17 16:24:14.172686] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.280 [2024-04-17 16:24:14.172723] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.280 2024/04/17 16:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.280 [2024-04-17 16:24:14.187188] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.280 [2024-04-17 16:24:14.187227] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.281 2024/04/17 16:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.281 [2024-04-17 16:24:14.198158] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.281 [2024-04-17 16:24:14.198197] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.281 2024/04/17 16:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.281 [2024-04-17 16:24:14.213641] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.281 [2024-04-17 16:24:14.213686] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.281 2024/04/17 16:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.281 [2024-04-17 16:24:14.229319] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.281 [2024-04-17 16:24:14.229363] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.281 2024/04/17 16:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.281 [2024-04-17 16:24:14.245499] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.281 [2024-04-17 16:24:14.245543] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.281 2024/04/17 16:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.281 [2024-04-17 16:24:14.262418] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.281 [2024-04-17 16:24:14.262467] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.281 2024/04/17 16:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.281 [2024-04-17 16:24:14.278737] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.281 [2024-04-17 16:24:14.278797] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.281 2024/04/17 16:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.281 [2024-04-17 16:24:14.295486] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.281 [2024-04-17 16:24:14.295530] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.281 2024/04/17 16:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.281 [2024-04-17 16:24:14.306579] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.281 [2024-04-17 16:24:14.306629] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.281 2024/04/17 16:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.281 [2024-04-17 16:24:14.317905] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:40.281 [2024-04-17 16:24:14.317944] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:40.281 2024/04/17 16:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:40.538 [2024-04-17 16:24:14.330949] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use
00:12:40.538 [2024-04-17 16:24:14.330989] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:40.538 2024/04/17 16:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:12:40.538 [2024-04-17 16:24:14.348060] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:40.538 [2024-04-17 16:24:14.348111] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:40.538 2024/04/17 16:24:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same three-line failure (spdk_nvmf_subsystem_add_ns_ext: "Requested NSID 1 already in use" -> nvmf_rpc_ns_paused: "Unable to add namespace" -> JSON-RPC error Code=-32602 Msg=Invalid parameters) recurs with only the timestamps advancing, roughly every 10-25 ms from 16:24:14.363 through 16:24:15.739; about 90 further repetitions elided ...]
00:12:41.834                                     Latency(us)
00:12:41.834 Device Information                : runtime(s)       IOPS      MiB/s    Fail/s    TO/s     Average        min        max
00:12:41.834 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:12:41.834 Nvme1n1                           :       5.02   10723.21      83.78      0.00     0.00    11907.96    5213.09   38130.04
00:12:41.834 ===================================================================================================================
00:12:41.834 Total                             :              10723.21      83.78      0.00     0.00    11907.96    5213.09   38130.04
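The summary row is internally consistent; a quick check of the columns (arithmetic added here, not part of the log):

  83.78 MiB/s  = 10723.21 IOPS x 8192 B per I/O / 2^20            (matches the MiB/s column)
  127.7 ~= 128 = 10723.21 IOPS x 11907.96 us average latency      (Little's law; matches the job's queue depth of 128)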
[... after the I/O summary the identical add_ns failure keeps arriving, now at roughly 12-25 ms intervals, from 16:24:15.750 through 16:24:16.002; about 20 further repetitions elided ...]
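For the record, rpc_cmd in the traces below is the test suite's thin wrapper around SPDK's stock scripts/rpc.py client, and the Code=-32602 storm above is the zcopy test deliberately re-issuing the add-namespace RPC against an NSID that is already claimed. A minimal hand-run sketch of the same failure, assuming a running nvmf target that already exposes malloc0 as NSID 1 of cnode1 (path relative to an SPDK checkout):

  # adding any bdev under an NSID that is already taken is rejected inside
  # spdk_nvmf_subsystem_add_ns_ext ("Requested NSID 1 already in use") and
  # surfaces to the JSON-RPC client as Code=-32602 Msg=Invalid parameters
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1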
00:12:42.094 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (74876) - No such process
00:12:42.094 16:24:16 -- target/zcopy.sh@49 -- # wait 74876
00:12:42.094 16:24:16 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:42.094 16:24:16 -- common/autotest_common.sh@549 -- # xtrace_disable
00:12:42.094 16:24:16 -- common/autotest_common.sh@10 -- # set +x
00:12:42.094 16:24:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:12:42.094 16:24:16 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:12:42.094 16:24:16 -- common/autotest_common.sh@549 -- # xtrace_disable
00:12:42.094 16:24:16 -- common/autotest_common.sh@10 -- # set +x
00:12:42.094 delay0
00:12:42.094 16:24:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:12:42.094 16:24:16 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:12:42.094 16:24:16 -- common/autotest_common.sh@549 -- # xtrace_disable
00:12:42.094 16:24:16 -- common/autotest_common.sh@10 -- # set +x
00:12:42.094 16:24:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:12:42.094 16:24:16 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:12:42.352 [2024-04-17 16:24:16.192331] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:12:48.917 Initializing NVMe Controllers
00:12:48.917 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:12:48.917 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:12:48.917 Initialization complete. Launching workers.
00:12:48.917 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 80
00:12:48.917 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 367, failed to submit 33
00:12:48.917 success 166, unsuccess 201, failed 0
00:12:48.917 16:24:22 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:12:48.917 16:24:22 -- target/zcopy.sh@60 -- # nvmftestfini
00:12:48.917 16:24:22 -- nvmf/common.sh@477 -- # nvmfcleanup
00:12:48.917 16:24:22 -- nvmf/common.sh@117 -- # sync
00:12:48.917 16:24:22 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:12:48.917 16:24:22 -- nvmf/common.sh@120 -- # set +e
00:12:48.917 16:24:22 -- nvmf/common.sh@121 -- # for i in {1..20}
00:12:48.917 16:24:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:12:48.917 rmmod nvme_tcp
00:12:48.917 rmmod nvme_fabrics
00:12:48.917 rmmod nvme_keyring
00:12:48.917 16:24:22 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:12:48.917 16:24:22 -- nvmf/common.sh@124 -- # set -e
00:12:48.917 16:24:22 -- nvmf/common.sh@125 -- # return 0
00:12:48.917 16:24:22 -- nvmf/common.sh@478 -- # '[' -n 74703 ']'
00:12:48.917 16:24:22 -- nvmf/common.sh@479 -- # killprocess 74703
00:12:48.917 16:24:22 -- common/autotest_common.sh@936 -- # '[' -z 74703 ']'
00:12:48.917 16:24:22 -- common/autotest_common.sh@940 -- # kill -0 74703
00:12:48.917 16:24:22 -- common/autotest_common.sh@941 -- # uname
00:12:48.917 16:24:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:12:48.917 16:24:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74703
00:12:48.917 killing process with pid 74703
00:12:48.917 16:24:22 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:12:48.917 16:24:22 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:12:48.917 16:24:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74703'
00:12:48.917 16:24:22 -- common/autotest_common.sh@955 -- # kill 74703
00:12:48.917 16:24:22 -- common/autotest_common.sh@960 -- # wait 74703
00:12:48.918 16:24:22 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:12:48.918 16:24:22 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:12:48.918 16:24:22 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:12:48.918 16:24:22 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:12:48.918 16:24:22 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:12:48.918 16:24:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:48.918 16:24:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:12:48.918 16:24:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:48.918 16:24:22 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if
00:12:48.918
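Condensed from the trace above: the tail of the zcopy test swaps the plain malloc bdev for a delay bdev so that build/examples/abort has slow, still-queued commands to cancel. Re-issued by hand against a live target, the same sequence would look roughly like this (all values are the ones the script itself used; rpc_cmd is again the scripts/rpc.py wrapper):

  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  # delay0 wraps malloc0 and adds 1000000 us (~1 s) of latency to every read and write
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # 5 s of queue-depth-64, 50/50 random I/O on core 0, aborted while in flight
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The abort totals above square with each other: 320 completed plus 80 failed I/Os is the 400 submitted, the 367 aborts submitted plus the 33 that failed to submit also cover all 400, and of those 367 aborts 166 succeeded while 201 did not.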
00:12:48.918 real 0m24.935s 00:12:48.918 user 0m40.473s 00:12:48.918 sys 0m6.577s 00:12:48.918 16:24:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:48.918 16:24:22 -- common/autotest_common.sh@10 -- # set +x 00:12:48.918 ************************************ 00:12:48.918 END TEST nvmf_zcopy 00:12:48.918 ************************************ 00:12:48.918 16:24:22 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:48.918 16:24:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:48.918 16:24:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:48.918 16:24:22 -- common/autotest_common.sh@10 -- # set +x 00:12:48.918 ************************************ 00:12:48.918 START TEST nvmf_nmic 00:12:48.918 ************************************ 00:12:48.918 16:24:22 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:48.918 * Looking for test storage... 00:12:48.918 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:48.918 16:24:22 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:48.918 16:24:22 -- nvmf/common.sh@7 -- # uname -s 00:12:48.918 16:24:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:48.918 16:24:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:48.918 16:24:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:48.918 16:24:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:48.918 16:24:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:48.918 16:24:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:48.918 16:24:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:48.918 16:24:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:48.918 16:24:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:48.918 16:24:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:48.918 16:24:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d 00:12:48.918 16:24:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=35bbb10f-fc38-42ac-b909-033700c5e05d 00:12:48.918 16:24:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:48.918 16:24:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:48.918 16:24:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:48.918 16:24:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:48.918 16:24:22 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:48.918 16:24:22 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:48.918 16:24:22 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:48.918 16:24:22 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:48.918 16:24:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.918 16:24:22 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.918 16:24:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.918 16:24:22 -- paths/export.sh@5 -- # export PATH 00:12:48.918 16:24:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.918 16:24:22 -- nvmf/common.sh@47 -- # : 0 00:12:48.918 16:24:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:48.918 16:24:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:48.918 16:24:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:48.918 16:24:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:48.918 16:24:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:48.918 16:24:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:48.918 16:24:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:48.918 16:24:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:48.918 16:24:22 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:48.918 16:24:22 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:48.918 16:24:22 -- target/nmic.sh@14 -- # nvmftestinit 00:12:48.918 16:24:22 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:48.918 16:24:22 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:48.918 16:24:22 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:48.918 16:24:22 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:48.918 16:24:22 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:48.918 16:24:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.918 16:24:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:48.918 16:24:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:48.918 16:24:22 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:12:48.918 16:24:22 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:12:48.918 16:24:22 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:12:48.918 16:24:22 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:12:48.918 16:24:22 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:12:48.918 16:24:22 -- 
nvmf/common.sh@421 -- # nvmf_veth_init 00:12:48.918 16:24:22 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:48.918 16:24:22 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:48.918 16:24:22 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:48.918 16:24:22 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:48.918 16:24:22 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:48.918 16:24:22 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:48.918 16:24:22 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:48.918 16:24:22 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:48.918 16:24:22 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:48.918 16:24:22 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:48.918 16:24:22 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:48.918 16:24:22 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:48.918 16:24:22 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:48.918 16:24:22 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:48.918 Cannot find device "nvmf_tgt_br" 00:12:48.918 16:24:22 -- nvmf/common.sh@155 -- # true 00:12:48.918 16:24:22 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:48.918 Cannot find device "nvmf_tgt_br2" 00:12:48.918 16:24:22 -- nvmf/common.sh@156 -- # true 00:12:48.918 16:24:22 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:48.918 16:24:22 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:49.177 Cannot find device "nvmf_tgt_br" 00:12:49.177 16:24:22 -- nvmf/common.sh@158 -- # true 00:12:49.177 16:24:22 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:49.177 Cannot find device "nvmf_tgt_br2" 00:12:49.177 16:24:22 -- nvmf/common.sh@159 -- # true 00:12:49.177 16:24:22 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:49.177 16:24:23 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:49.177 16:24:23 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:49.177 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:49.177 16:24:23 -- nvmf/common.sh@162 -- # true 00:12:49.177 16:24:23 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:49.177 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:49.177 16:24:23 -- nvmf/common.sh@163 -- # true 00:12:49.177 16:24:23 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:49.177 16:24:23 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:49.177 16:24:23 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:49.177 16:24:23 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:49.177 16:24:23 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:49.177 16:24:23 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:49.177 16:24:23 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:49.177 16:24:23 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:49.177 16:24:23 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:49.177 16:24:23 -- nvmf/common.sh@183 
-- # ip link set nvmf_init_if up 00:12:49.177 16:24:23 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:49.177 16:24:23 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:49.177 16:24:23 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:49.177 16:24:23 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:49.177 16:24:23 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:49.177 16:24:23 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:49.177 16:24:23 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:49.177 16:24:23 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:49.177 16:24:23 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:49.177 16:24:23 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:49.177 16:24:23 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:49.436 16:24:23 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:49.436 16:24:23 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:49.436 16:24:23 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:49.436 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:49.436 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 00:12:49.436 00:12:49.436 --- 10.0.0.2 ping statistics --- 00:12:49.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.436 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:12:49.436 16:24:23 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:49.436 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:49.436 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:12:49.436 00:12:49.436 --- 10.0.0.3 ping statistics --- 00:12:49.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.436 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:12:49.436 16:24:23 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:49.436 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:49.436 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:12:49.436 00:12:49.436 --- 10.0.0.1 ping statistics --- 00:12:49.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.436 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:12:49.436 16:24:23 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:49.436 16:24:23 -- nvmf/common.sh@422 -- # return 0 00:12:49.436 16:24:23 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:49.436 16:24:23 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:49.436 16:24:23 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:49.436 16:24:23 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:49.436 16:24:23 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:49.436 16:24:23 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:49.436 16:24:23 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:49.436 16:24:23 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:12:49.436 16:24:23 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:49.436 16:24:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:49.436 16:24:23 -- common/autotest_common.sh@10 -- # set +x 00:12:49.436 16:24:23 -- nvmf/common.sh@470 -- # nvmfpid=75205 00:12:49.436 16:24:23 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:49.436 16:24:23 -- nvmf/common.sh@471 -- # waitforlisten 75205 00:12:49.436 16:24:23 -- common/autotest_common.sh@817 -- # '[' -z 75205 ']' 00:12:49.436 16:24:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:49.436 16:24:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:49.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:49.436 16:24:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:49.436 16:24:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:49.436 16:24:23 -- common/autotest_common.sh@10 -- # set +x 00:12:49.436 [2024-04-17 16:24:23.346288] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:12:49.436 [2024-04-17 16:24:23.346442] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:49.695 [2024-04-17 16:24:23.495762] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:49.695 [2024-04-17 16:24:23.634886] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:49.695 [2024-04-17 16:24:23.634976] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:49.695 [2024-04-17 16:24:23.634998] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:49.695 [2024-04-17 16:24:23.635008] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:49.695 [2024-04-17 16:24:23.635018] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
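The target application is launched inside the namespace built earlier, which is why the EAL and trace notices above come from the netns-wrapped command. A minimal sketch of the start-and-wait sequence, condensed from the trace (the until-loop is a simplified stand-in for the harness's waitforlisten helper, not its actual implementation):

    # start nvmf_tgt in the target namespace, in the background
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll the JSON-RPC socket until the app answers
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done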
00:12:49.695 [2024-04-17 16:24:23.635414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:49.695 [2024-04-17 16:24:23.635592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:49.695 [2024-04-17 16:24:23.635697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:49.695 [2024-04-17 16:24:23.635705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.631 16:24:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:50.631 16:24:24 -- common/autotest_common.sh@850 -- # return 0 00:12:50.631 16:24:24 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:50.631 16:24:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:50.631 16:24:24 -- common/autotest_common.sh@10 -- # set +x 00:12:50.631 16:24:24 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:50.631 16:24:24 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:50.631 16:24:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:50.631 16:24:24 -- common/autotest_common.sh@10 -- # set +x 00:12:50.631 [2024-04-17 16:24:24.423551] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:50.631 16:24:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:50.631 16:24:24 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:50.631 16:24:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:50.631 16:24:24 -- common/autotest_common.sh@10 -- # set +x 00:12:50.631 Malloc0 00:12:50.631 16:24:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:50.631 16:24:24 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:50.631 16:24:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:50.631 16:24:24 -- common/autotest_common.sh@10 -- # set +x 00:12:50.631 16:24:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:50.631 16:24:24 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:50.631 16:24:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:50.631 16:24:24 -- common/autotest_common.sh@10 -- # set +x 00:12:50.631 16:24:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:50.631 16:24:24 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:50.631 16:24:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:50.631 16:24:24 -- common/autotest_common.sh@10 -- # set +x 00:12:50.631 [2024-04-17 16:24:24.487792] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:50.631 16:24:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:50.631 test case1: single bdev can't be used in multiple subsystems 00:12:50.631 16:24:24 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:12:50.631 16:24:24 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:12:50.631 16:24:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:50.631 16:24:24 -- common/autotest_common.sh@10 -- # set +x 00:12:50.631 16:24:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:50.631 16:24:24 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:50.631 16:24:24 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:12:50.631 16:24:24 -- common/autotest_common.sh@10 -- # set +x 00:12:50.631 16:24:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:50.631 16:24:24 -- target/nmic.sh@28 -- # nmic_status=0 00:12:50.631 16:24:24 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:12:50.631 16:24:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:50.631 16:24:24 -- common/autotest_common.sh@10 -- # set +x 00:12:50.631 [2024-04-17 16:24:24.511611] bdev.c:7988:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:12:50.631 [2024-04-17 16:24:24.511665] subsystem.c:1930:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:12:50.631 [2024-04-17 16:24:24.511677] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.631 2024/04/17 16:24:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:50.631 request: 00:12:50.631 { 00:12:50.631 "method": "nvmf_subsystem_add_ns", 00:12:50.631 "params": { 00:12:50.631 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:50.631 "namespace": { 00:12:50.631 "bdev_name": "Malloc0", 00:12:50.631 "no_auto_visible": false 00:12:50.631 } 00:12:50.631 } 00:12:50.631 } 00:12:50.631 Got JSON-RPC error response 00:12:50.631 GoRPCClient: error on JSON-RPC call 00:12:50.631 16:24:24 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:12:50.631 16:24:24 -- target/nmic.sh@29 -- # nmic_status=1 00:12:50.631 16:24:24 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:12:50.631 Adding namespace failed - expected result. 00:12:50.631 16:24:24 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
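The failure above is the whole point of test case1: the first nvmf_subsystem_add_ns takes an exclusive_write claim on Malloc0, so a second subsystem cannot open the same bdev and the RPC returns Code=-32602 (Invalid parameters). The sequence can be reproduced by hand with the same RPCs the harness just traced:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # claims Malloc0 (exclusive_write)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # expected failure: Code=-32602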
00:12:50.631 test case2: host connect to nvmf target in multiple paths 00:12:50.631 16:24:24 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:12:50.631 16:24:24 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:12:50.631 16:24:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:50.631 16:24:24 -- common/autotest_common.sh@10 -- # set +x 00:12:50.631 [2024-04-17 16:24:24.523798] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:12:50.631 16:24:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:50.631 16:24:24 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d --hostid=35bbb10f-fc38-42ac-b909-033700c5e05d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:50.892 16:24:24 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d --hostid=35bbb10f-fc38-42ac-b909-033700c5e05d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:12:50.892 16:24:24 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:12:50.892 16:24:24 -- common/autotest_common.sh@1184 -- # local i=0 00:12:50.892 16:24:24 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:50.892 16:24:24 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:12:50.892 16:24:24 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:53.424 16:24:26 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:53.424 16:24:26 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:53.424 16:24:26 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:53.424 16:24:26 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:12:53.424 16:24:26 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:53.424 16:24:26 -- common/autotest_common.sh@1194 -- # return 0 00:12:53.424 16:24:26 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:53.424 [global] 00:12:53.424 thread=1 00:12:53.424 invalidate=1 00:12:53.424 rw=write 00:12:53.424 time_based=1 00:12:53.424 runtime=1 00:12:53.424 ioengine=libaio 00:12:53.424 direct=1 00:12:53.424 bs=4096 00:12:53.424 iodepth=1 00:12:53.424 norandommap=0 00:12:53.424 numjobs=1 00:12:53.424 00:12:53.424 verify_dump=1 00:12:53.424 verify_backlog=512 00:12:53.424 verify_state_save=0 00:12:53.424 do_verify=1 00:12:53.424 verify=crc32c-intel 00:12:53.424 [job0] 00:12:53.424 filename=/dev/nvme0n1 00:12:53.424 Could not set queue depth (nvme0n1) 00:12:53.424 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:53.424 fio-3.35 00:12:53.424 Starting 1 thread 00:12:54.360 00:12:54.360 job0: (groupid=0, jobs=1): err= 0: pid=75314: Wed Apr 17 16:24:28 2024 00:12:54.360 read: IOPS=2439, BW=9758KiB/s (9992kB/s)(9768KiB/1001msec) 00:12:54.360 slat (usec): min=13, max=134, avg=20.09, stdev= 7.60 00:12:54.360 clat (usec): min=149, max=398, avg=206.88, stdev=30.93 00:12:54.360 lat (usec): min=166, max=420, avg=226.97, stdev=32.36 00:12:54.360 clat percentiles (usec): 00:12:54.360 | 1.00th=[ 157], 5.00th=[ 167], 10.00th=[ 174], 20.00th=[ 180], 00:12:54.360 | 30.00th=[ 188], 40.00th=[ 196], 50.00th=[ 202], 60.00th=[ 212], 00:12:54.360 | 70.00th=[ 221], 80.00th=[ 231], 90.00th=[ 245], 95.00th=[ 
260], 00:12:54.360 | 99.00th=[ 310], 99.50th=[ 338], 99.90th=[ 359], 99.95th=[ 363], 00:12:54.360 | 99.99th=[ 400] 00:12:54.360 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:12:54.360 slat (usec): min=20, max=137, avg=28.71, stdev= 8.50 00:12:54.360 clat (usec): min=98, max=404, avg=141.59, stdev=27.72 00:12:54.360 lat (usec): min=121, max=541, avg=170.30, stdev=31.07 00:12:54.360 clat percentiles (usec): 00:12:54.360 | 1.00th=[ 104], 5.00th=[ 110], 10.00th=[ 114], 20.00th=[ 119], 00:12:54.360 | 30.00th=[ 124], 40.00th=[ 129], 50.00th=[ 137], 60.00th=[ 145], 00:12:54.360 | 70.00th=[ 153], 80.00th=[ 161], 90.00th=[ 176], 95.00th=[ 190], 00:12:54.360 | 99.00th=[ 235], 99.50th=[ 258], 99.90th=[ 297], 99.95th=[ 359], 00:12:54.360 | 99.99th=[ 404] 00:12:54.360 bw ( KiB/s): min=12024, max=12024, per=100.00%, avg=12024.00, stdev= 0.00, samples=1 00:12:54.360 iops : min= 3006, max= 3006, avg=3006.00, stdev= 0.00, samples=1 00:12:54.360 lat (usec) : 100=0.04%, 250=95.90%, 500=4.06% 00:12:54.360 cpu : usr=2.00%, sys=8.80%, ctx=5002, majf=0, minf=2 00:12:54.360 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:54.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:54.360 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:54.360 issued rwts: total=2442,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:54.360 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:54.360 00:12:54.360 Run status group 0 (all jobs): 00:12:54.360 READ: bw=9758KiB/s (9992kB/s), 9758KiB/s-9758KiB/s (9992kB/s-9992kB/s), io=9768KiB (10.0MB), run=1001-1001msec 00:12:54.360 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:12:54.360 00:12:54.360 Disk stats (read/write): 00:12:54.360 nvme0n1: ios=2098/2486, merge=0/0, ticks=459/393, in_queue=852, util=91.38% 00:12:54.360 16:24:28 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:54.360 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:12:54.360 16:24:28 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:54.360 16:24:28 -- common/autotest_common.sh@1205 -- # local i=0 00:12:54.360 16:24:28 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:12:54.360 16:24:28 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:54.360 16:24:28 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:54.360 16:24:28 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:12:54.360 16:24:28 -- common/autotest_common.sh@1217 -- # return 0 00:12:54.360 16:24:28 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:12:54.360 16:24:28 -- target/nmic.sh@53 -- # nvmftestfini 00:12:54.360 16:24:28 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:54.360 16:24:28 -- nvmf/common.sh@117 -- # sync 00:12:54.360 16:24:28 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:54.360 16:24:28 -- nvmf/common.sh@120 -- # set +e 00:12:54.360 16:24:28 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:54.360 16:24:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:54.360 rmmod nvme_tcp 00:12:54.360 rmmod nvme_fabrics 00:12:54.360 rmmod nvme_keyring 00:12:54.360 16:24:28 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:54.360 16:24:28 -- nvmf/common.sh@124 -- # set -e 00:12:54.360 16:24:28 -- nvmf/common.sh@125 -- # return 0 00:12:54.360 16:24:28 -- nvmf/common.sh@478 -- # '[' -n 75205 
']' 00:12:54.360 16:24:28 -- nvmf/common.sh@479 -- # killprocess 75205 00:12:54.360 16:24:28 -- common/autotest_common.sh@936 -- # '[' -z 75205 ']' 00:12:54.360 16:24:28 -- common/autotest_common.sh@940 -- # kill -0 75205 00:12:54.360 16:24:28 -- common/autotest_common.sh@941 -- # uname 00:12:54.360 16:24:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:54.360 16:24:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75205 00:12:54.360 16:24:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:54.360 16:24:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:54.360 killing process with pid 75205 00:12:54.360 16:24:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75205' 00:12:54.360 16:24:28 -- common/autotest_common.sh@955 -- # kill 75205 00:12:54.360 16:24:28 -- common/autotest_common.sh@960 -- # wait 75205 00:12:54.928 16:24:28 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:54.928 16:24:28 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:54.928 16:24:28 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:54.928 16:24:28 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:54.928 16:24:28 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:54.928 16:24:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:54.928 16:24:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:54.928 16:24:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.928 16:24:28 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:54.928 00:12:54.928 real 0m5.954s 00:12:54.928 user 0m19.645s 00:12:54.928 sys 0m1.439s 00:12:54.928 16:24:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:54.928 16:24:28 -- common/autotest_common.sh@10 -- # set +x 00:12:54.928 ************************************ 00:12:54.928 END TEST nvmf_nmic 00:12:54.928 ************************************ 00:12:54.928 16:24:28 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:54.928 16:24:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:54.928 16:24:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:54.928 16:24:28 -- common/autotest_common.sh@10 -- # set +x 00:12:54.928 ************************************ 00:12:54.928 START TEST nvmf_fio_target 00:12:54.928 ************************************ 00:12:54.928 16:24:28 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:54.928 * Looking for test storage... 
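A note on test case2 in the nvmf_nmic run above: the two nvme connect calls target the same subsystem through two listeners (ports 4420 and 4421), so the host ends up with two controllers to nqn.2016-06.io.spdk:cnode1, which is exactly why the single nvme disconnect later reports "disconnected 2 controller(s)". Condensed from the trace, with the generated host identity factored into variables:

    hostnqn=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d
    hostid=35bbb10f-fc38-42ac-b909-033700c5e05d
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 --hostnqn=$hostnqn --hostid=$hostid
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 --hostnqn=$hostnqn --hostid=$hostid
    # ...run the workload, then tear down both controllers at once:
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1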
00:12:54.928 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:54.928 16:24:28 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:54.928 16:24:28 -- nvmf/common.sh@7 -- # uname -s 00:12:54.928 16:24:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:54.928 16:24:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:54.928 16:24:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:54.928 16:24:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:54.928 16:24:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:54.928 16:24:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:54.928 16:24:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:54.928 16:24:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:54.928 16:24:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:54.928 16:24:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:55.187 16:24:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d 00:12:55.187 16:24:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=35bbb10f-fc38-42ac-b909-033700c5e05d 00:12:55.187 16:24:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:55.187 16:24:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:55.187 16:24:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:55.187 16:24:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:55.187 16:24:28 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:55.187 16:24:28 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:55.187 16:24:28 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:55.187 16:24:28 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:55.187 16:24:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.187 16:24:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.187 16:24:28 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.187 16:24:28 -- paths/export.sh@5 -- # export PATH 00:12:55.187 16:24:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.187 16:24:28 -- nvmf/common.sh@47 -- # : 0 00:12:55.187 16:24:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:55.187 16:24:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:55.187 16:24:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:55.187 16:24:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:55.187 16:24:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:55.187 16:24:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:55.187 16:24:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:55.187 16:24:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:55.187 16:24:28 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:55.187 16:24:28 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:55.187 16:24:28 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:55.187 16:24:28 -- target/fio.sh@16 -- # nvmftestinit 00:12:55.187 16:24:28 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:55.187 16:24:28 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:55.187 16:24:28 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:55.187 16:24:28 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:55.187 16:24:28 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:55.187 16:24:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.187 16:24:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:55.187 16:24:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.187 16:24:28 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:12:55.188 16:24:28 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:12:55.188 16:24:28 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:12:55.188 16:24:28 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:12:55.188 16:24:28 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:12:55.188 16:24:28 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:12:55.188 16:24:28 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:55.188 16:24:28 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:55.188 16:24:28 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:55.188 16:24:28 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:55.188 16:24:28 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:55.188 16:24:28 -- nvmf/common.sh@146 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:55.188 16:24:28 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:55.188 16:24:28 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:55.188 16:24:28 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:55.188 16:24:28 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:55.188 16:24:28 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:55.188 16:24:28 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:55.188 16:24:28 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:55.188 16:24:29 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:55.188 Cannot find device "nvmf_tgt_br" 00:12:55.188 16:24:29 -- nvmf/common.sh@155 -- # true 00:12:55.188 16:24:29 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:55.188 Cannot find device "nvmf_tgt_br2" 00:12:55.188 16:24:29 -- nvmf/common.sh@156 -- # true 00:12:55.188 16:24:29 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:55.188 16:24:29 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:55.188 Cannot find device "nvmf_tgt_br" 00:12:55.188 16:24:29 -- nvmf/common.sh@158 -- # true 00:12:55.188 16:24:29 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:55.188 Cannot find device "nvmf_tgt_br2" 00:12:55.188 16:24:29 -- nvmf/common.sh@159 -- # true 00:12:55.188 16:24:29 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:55.188 16:24:29 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:55.188 16:24:29 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:55.188 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:55.188 16:24:29 -- nvmf/common.sh@162 -- # true 00:12:55.188 16:24:29 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:55.188 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:55.188 16:24:29 -- nvmf/common.sh@163 -- # true 00:12:55.188 16:24:29 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:55.188 16:24:29 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:55.188 16:24:29 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:55.188 16:24:29 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:55.188 16:24:29 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:55.188 16:24:29 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:55.188 16:24:29 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:55.188 16:24:29 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:55.188 16:24:29 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:55.188 16:24:29 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:55.188 16:24:29 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:55.188 16:24:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:55.188 16:24:29 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:55.188 16:24:29 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:55.447 16:24:29 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
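At this point nvmf_veth_init has built the test network. Condensed into one place (names and addresses exactly as traced; the *_br peer ends get enslaved to the nvmf_br bridge in the steps that follow):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side, stays in the host
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # first target path
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # second target path
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # ...followed by bringing every link up on both sides of the namespace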
00:12:55.447 16:24:29 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:55.447 16:24:29 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:55.447 16:24:29 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:55.447 16:24:29 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:55.447 16:24:29 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:55.447 16:24:29 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:55.447 16:24:29 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:55.447 16:24:29 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:55.447 16:24:29 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:55.447 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:55.447 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:12:55.447 00:12:55.447 --- 10.0.0.2 ping statistics --- 00:12:55.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:55.447 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:12:55.447 16:24:29 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:55.447 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:55.447 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:12:55.447 00:12:55.447 --- 10.0.0.3 ping statistics --- 00:12:55.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:55.447 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:12:55.447 16:24:29 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:55.447 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:55.447 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:12:55.447 00:12:55.447 --- 10.0.0.1 ping statistics --- 00:12:55.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:55.447 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:12:55.447 16:24:29 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:55.447 16:24:29 -- nvmf/common.sh@422 -- # return 0 00:12:55.447 16:24:29 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:55.447 16:24:29 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:55.447 16:24:29 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:55.447 16:24:29 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:55.447 16:24:29 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:55.447 16:24:29 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:55.447 16:24:29 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:55.448 16:24:29 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:12:55.448 16:24:29 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:55.448 16:24:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:55.448 16:24:29 -- common/autotest_common.sh@10 -- # set +x 00:12:55.448 16:24:29 -- nvmf/common.sh@470 -- # nvmfpid=75498 00:12:55.448 16:24:29 -- nvmf/common.sh@471 -- # waitforlisten 75498 00:12:55.448 16:24:29 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:55.448 16:24:29 -- common/autotest_common.sh@817 -- # '[' -z 75498 ']' 00:12:55.448 16:24:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:55.448 16:24:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:55.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
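The bridge wiring, firewall rules and reachability checks traced above, condensed (a sketch; the INPUT rule is inserted at position 1, presumably so it takes precedence over any pre-existing rules, and the sub-millisecond ping RTTs confirm host-to-namespace connectivity before the target comes up):

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br     # enslave all three peer ends
    done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                # host -> namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1       # namespace -> host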
00:12:55.448 16:24:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:55.448 16:24:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:55.448 16:24:29 -- common/autotest_common.sh@10 -- # set +x 00:12:55.448 [2024-04-17 16:24:29.421681] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:12:55.448 [2024-04-17 16:24:29.422000] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:55.707 [2024-04-17 16:24:29.565682] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:55.707 [2024-04-17 16:24:29.700080] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:55.707 [2024-04-17 16:24:29.700142] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:55.707 [2024-04-17 16:24:29.700168] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:55.707 [2024-04-17 16:24:29.700178] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:55.707 [2024-04-17 16:24:29.700197] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:55.707 [2024-04-17 16:24:29.700293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:55.707 [2024-04-17 16:24:29.700410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:55.707 [2024-04-17 16:24:29.701302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:55.707 [2024-04-17 16:24:29.701323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.642 16:24:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:56.642 16:24:30 -- common/autotest_common.sh@850 -- # return 0 00:12:56.642 16:24:30 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:56.642 16:24:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:56.642 16:24:30 -- common/autotest_common.sh@10 -- # set +x 00:12:56.642 16:24:30 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:56.642 16:24:30 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:56.901 [2024-04-17 16:24:30.706827] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:56.901 16:24:30 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:57.160 16:24:31 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:12:57.160 16:24:31 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:57.418 16:24:31 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:12:57.418 16:24:31 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:57.729 16:24:31 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:12:57.729 16:24:31 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:58.311 16:24:32 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:12:58.311 16:24:32 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:12:58.311 16:24:32 -- 
target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:58.880 16:24:32 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:12:58.880 16:24:32 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:59.140 16:24:32 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:12:59.140 16:24:32 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:59.400 16:24:33 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:12:59.400 16:24:33 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:12:59.660 16:24:33 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:59.919 16:24:33 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:59.919 16:24:33 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:00.177 16:24:34 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:00.177 16:24:34 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:00.436 16:24:34 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:00.694 [2024-04-17 16:24:34.618983] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:00.695 16:24:34 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:13:00.953 16:24:34 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:13:01.522 16:24:35 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d --hostid=35bbb10f-fc38-42ac-b909-033700c5e05d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:01.522 16:24:35 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:13:01.522 16:24:35 -- common/autotest_common.sh@1184 -- # local i=0 00:13:01.522 16:24:35 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:01.522 16:24:35 -- common/autotest_common.sh@1186 -- # [[ -n 4 ]] 00:13:01.522 16:24:35 -- common/autotest_common.sh@1187 -- # nvme_device_counter=4 00:13:01.522 16:24:35 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:03.429 16:24:37 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:03.429 16:24:37 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:03.429 16:24:37 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:03.689 16:24:37 -- common/autotest_common.sh@1193 -- # nvme_devices=4 00:13:03.689 16:24:37 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:03.689 16:24:37 -- common/autotest_common.sh@1194 -- # return 0 00:13:03.689 16:24:37 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:03.689 [global] 00:13:03.689 thread=1 00:13:03.689 invalidate=1 00:13:03.689 rw=write 00:13:03.689 time_based=1 00:13:03.689 runtime=1 00:13:03.689 ioengine=libaio 00:13:03.689 direct=1 00:13:03.689 bs=4096 00:13:03.689 iodepth=1 
00:13:03.689 norandommap=0 00:13:03.689 numjobs=1 00:13:03.689 00:13:03.689 verify_dump=1 00:13:03.689 verify_backlog=512 00:13:03.689 verify_state_save=0 00:13:03.689 do_verify=1 00:13:03.689 verify=crc32c-intel 00:13:03.689 [job0] 00:13:03.689 filename=/dev/nvme0n1 00:13:03.689 [job1] 00:13:03.689 filename=/dev/nvme0n2 00:13:03.689 [job2] 00:13:03.689 filename=/dev/nvme0n3 00:13:03.689 [job3] 00:13:03.689 filename=/dev/nvme0n4 00:13:03.689 Could not set queue depth (nvme0n1) 00:13:03.689 Could not set queue depth (nvme0n2) 00:13:03.689 Could not set queue depth (nvme0n3) 00:13:03.689 Could not set queue depth (nvme0n4) 00:13:03.689 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:03.689 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:03.689 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:03.690 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:03.690 fio-3.35 00:13:03.690 Starting 4 threads 00:13:05.082 00:13:05.082 job0: (groupid=0, jobs=1): err= 0: pid=75807: Wed Apr 17 16:24:38 2024 00:13:05.082 read: IOPS=1053, BW=4216KiB/s (4317kB/s)(4220KiB/1001msec) 00:13:05.082 slat (nsec): min=17837, max=79261, avg=28639.93, stdev=8838.89 00:13:05.082 clat (usec): min=181, max=2235, avg=411.41, stdev=92.94 00:13:05.082 lat (usec): min=223, max=2259, avg=440.05, stdev=94.33 00:13:05.082 clat percentiles (usec): 00:13:05.082 | 1.00th=[ 318], 5.00th=[ 338], 10.00th=[ 347], 20.00th=[ 359], 00:13:05.082 | 30.00th=[ 375], 40.00th=[ 383], 50.00th=[ 396], 60.00th=[ 408], 00:13:05.082 | 70.00th=[ 424], 80.00th=[ 445], 90.00th=[ 486], 95.00th=[ 537], 00:13:05.082 | 99.00th=[ 652], 99.50th=[ 668], 99.90th=[ 1450], 99.95th=[ 2245], 00:13:05.082 | 99.99th=[ 2245] 00:13:05.082 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:13:05.082 slat (usec): min=23, max=103, avg=41.55, stdev= 9.33 00:13:05.082 clat (usec): min=134, max=861, avg=301.75, stdev=57.32 00:13:05.082 lat (usec): min=181, max=899, avg=343.31, stdev=55.22 00:13:05.082 clat percentiles (usec): 00:13:05.082 | 1.00th=[ 212], 5.00th=[ 235], 10.00th=[ 247], 20.00th=[ 260], 00:13:05.082 | 30.00th=[ 269], 40.00th=[ 281], 50.00th=[ 289], 60.00th=[ 302], 00:13:05.082 | 70.00th=[ 318], 80.00th=[ 338], 90.00th=[ 371], 95.00th=[ 412], 00:13:05.082 | 99.00th=[ 498], 99.50th=[ 545], 99.90th=[ 644], 99.95th=[ 865], 00:13:05.082 | 99.99th=[ 865] 00:13:05.082 bw ( KiB/s): min= 6448, max= 6448, per=23.38%, avg=6448.00, stdev= 0.00, samples=1 00:13:05.082 iops : min= 1612, max= 1612, avg=1612.00, stdev= 0.00, samples=1 00:13:05.082 lat (usec) : 250=6.91%, 500=89.15%, 750=3.71%, 1000=0.15% 00:13:05.082 lat (msec) : 2=0.04%, 4=0.04% 00:13:05.082 cpu : usr=2.20%, sys=6.80%, ctx=2610, majf=0, minf=15 00:13:05.082 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:05.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:05.082 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:05.082 issued rwts: total=1055,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:05.082 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:05.082 job1: (groupid=0, jobs=1): err= 0: pid=75808: Wed Apr 17 16:24:38 2024 00:13:05.082 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:13:05.082 slat (nsec): min=13240, max=65215, 
avg=18570.05, stdev=5070.20 00:13:05.082 clat (usec): min=152, max=8111, avg=245.69, stdev=204.47 00:13:05.082 lat (usec): min=173, max=8128, avg=264.26, stdev=205.69 00:13:05.082 clat percentiles (usec): 00:13:05.082 | 1.00th=[ 165], 5.00th=[ 174], 10.00th=[ 182], 20.00th=[ 192], 00:13:05.082 | 30.00th=[ 202], 40.00th=[ 210], 50.00th=[ 221], 60.00th=[ 227], 00:13:05.082 | 70.00th=[ 239], 80.00th=[ 253], 90.00th=[ 281], 95.00th=[ 326], 00:13:05.082 | 99.00th=[ 734], 99.50th=[ 799], 99.90th=[ 947], 99.95th=[ 1713], 00:13:05.082 | 99.99th=[ 8094] 00:13:05.082 write: IOPS=2130, BW=8523KiB/s (8728kB/s)(8532KiB/1001msec); 0 zone resets 00:13:05.082 slat (nsec): min=19235, max=94434, avg=25775.15, stdev=7264.55 00:13:05.082 clat (usec): min=108, max=642, avg=185.27, stdev=67.28 00:13:05.082 lat (usec): min=129, max=677, avg=211.05, stdev=70.79 00:13:05.082 clat percentiles (usec): 00:13:05.082 | 1.00th=[ 119], 5.00th=[ 128], 10.00th=[ 135], 20.00th=[ 143], 00:13:05.082 | 30.00th=[ 151], 40.00th=[ 161], 50.00th=[ 172], 60.00th=[ 182], 00:13:05.082 | 70.00th=[ 192], 80.00th=[ 210], 90.00th=[ 237], 95.00th=[ 277], 00:13:05.082 | 99.00th=[ 490], 99.50th=[ 523], 99.90th=[ 644], 99.95th=[ 644], 00:13:05.082 | 99.99th=[ 644] 00:13:05.082 bw ( KiB/s): min= 9624, max= 9624, per=34.90%, avg=9624.00, stdev= 0.00, samples=1 00:13:05.082 iops : min= 2406, max= 2406, avg=2406.00, stdev= 0.00, samples=1 00:13:05.082 lat (usec) : 250=85.58%, 500=11.96%, 750=2.06%, 1000=0.36% 00:13:05.082 lat (msec) : 2=0.02%, 10=0.02% 00:13:05.082 cpu : usr=2.00%, sys=6.80%, ctx=4183, majf=0, minf=9 00:13:05.082 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:05.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:05.082 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:05.082 issued rwts: total=2048,2133,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:05.082 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:05.082 job2: (groupid=0, jobs=1): err= 0: pid=75809: Wed Apr 17 16:24:38 2024 00:13:05.082 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:13:05.082 slat (nsec): min=17238, max=85545, avg=24972.59, stdev=7246.76 00:13:05.082 clat (usec): min=217, max=622, avg=304.70, stdev=35.38 00:13:05.082 lat (usec): min=236, max=690, avg=329.68, stdev=37.51 00:13:05.082 clat percentiles (usec): 00:13:05.082 | 1.00th=[ 245], 5.00th=[ 260], 10.00th=[ 265], 20.00th=[ 277], 00:13:05.082 | 30.00th=[ 285], 40.00th=[ 289], 50.00th=[ 297], 60.00th=[ 310], 00:13:05.083 | 70.00th=[ 322], 80.00th=[ 334], 90.00th=[ 355], 95.00th=[ 367], 00:13:05.083 | 99.00th=[ 400], 99.50th=[ 412], 99.90th=[ 461], 99.95th=[ 627], 00:13:05.083 | 99.99th=[ 627] 00:13:05.083 write: IOPS=1694, BW=6777KiB/s (6940kB/s)(6784KiB/1001msec); 0 zone resets 00:13:05.083 slat (usec): min=24, max=143, avg=39.13, stdev=10.53 00:13:05.083 clat (usec): min=155, max=463, avg=246.47, stdev=33.09 00:13:05.083 lat (usec): min=180, max=557, avg=285.60, stdev=35.71 00:13:05.083 clat percentiles (usec): 00:13:05.083 | 1.00th=[ 184], 5.00th=[ 198], 10.00th=[ 208], 20.00th=[ 221], 00:13:05.083 | 30.00th=[ 229], 40.00th=[ 237], 50.00th=[ 245], 60.00th=[ 251], 00:13:05.083 | 70.00th=[ 262], 80.00th=[ 273], 90.00th=[ 289], 95.00th=[ 302], 00:13:05.083 | 99.00th=[ 347], 99.50th=[ 367], 99.90th=[ 453], 99.95th=[ 465], 00:13:05.083 | 99.99th=[ 465] 00:13:05.083 bw ( KiB/s): min= 8208, max= 8208, per=29.76%, avg=8208.00, stdev= 0.00, samples=1 00:13:05.083 iops : min= 2052, max= 2052, 
avg=2052.00, stdev= 0.00, samples=1 00:13:05.083 lat (usec) : 250=31.44%, 500=68.53%, 750=0.03% 00:13:05.083 cpu : usr=2.20%, sys=7.50%, ctx=3232, majf=0, minf=6 00:13:05.083 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:05.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:05.083 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:05.083 issued rwts: total=1536,1696,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:05.083 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:05.083 job3: (groupid=0, jobs=1): err= 0: pid=75810: Wed Apr 17 16:24:38 2024 00:13:05.083 read: IOPS=1122, BW=4492KiB/s (4599kB/s)(4496KiB/1001msec) 00:13:05.083 slat (nsec): min=14112, max=83257, avg=32003.93, stdev=10775.87 00:13:05.083 clat (usec): min=210, max=2116, avg=385.64, stdev=77.51 00:13:05.083 lat (usec): min=227, max=2146, avg=417.65, stdev=78.88 00:13:05.083 clat percentiles (usec): 00:13:05.083 | 1.00th=[ 247], 5.00th=[ 302], 10.00th=[ 322], 20.00th=[ 343], 00:13:05.083 | 30.00th=[ 355], 40.00th=[ 371], 50.00th=[ 379], 60.00th=[ 396], 00:13:05.083 | 70.00th=[ 412], 80.00th=[ 429], 90.00th=[ 449], 95.00th=[ 465], 00:13:05.083 | 99.00th=[ 506], 99.50th=[ 529], 99.90th=[ 1237], 99.95th=[ 2114], 00:13:05.083 | 99.99th=[ 2114] 00:13:05.083 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:13:05.083 slat (usec): min=22, max=131, avg=43.13, stdev= 9.69 00:13:05.083 clat (usec): min=140, max=2485, avg=296.40, stdev=86.01 00:13:05.083 lat (usec): min=175, max=2560, avg=339.53, stdev=85.83 00:13:05.083 clat percentiles (usec): 00:13:05.083 | 1.00th=[ 178], 5.00th=[ 215], 10.00th=[ 229], 20.00th=[ 249], 00:13:05.083 | 30.00th=[ 262], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 293], 00:13:05.083 | 70.00th=[ 310], 80.00th=[ 347], 90.00th=[ 379], 95.00th=[ 412], 00:13:05.083 | 99.00th=[ 498], 99.50th=[ 594], 99.90th=[ 881], 99.95th=[ 2474], 00:13:05.083 | 99.99th=[ 2474] 00:13:05.083 bw ( KiB/s): min= 5464, max= 6824, per=22.28%, avg=6144.00, stdev=961.67, samples=2 00:13:05.083 iops : min= 1366, max= 1706, avg=1536.00, stdev=240.42, samples=2 00:13:05.083 lat (usec) : 250=12.74%, 500=86.17%, 750=0.94%, 1000=0.04% 00:13:05.083 lat (msec) : 2=0.04%, 4=0.08% 00:13:05.083 cpu : usr=2.00%, sys=7.70%, ctx=2662, majf=0, minf=5 00:13:05.083 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:05.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:05.083 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:05.083 issued rwts: total=1124,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:05.083 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:05.083 00:13:05.083 Run status group 0 (all jobs): 00:13:05.083 READ: bw=22.5MiB/s (23.6MB/s), 4216KiB/s-8184KiB/s (4317kB/s-8380kB/s), io=22.5MiB (23.6MB), run=1001-1001msec 00:13:05.083 WRITE: bw=26.9MiB/s (28.2MB/s), 6138KiB/s-8523KiB/s (6285kB/s-8728kB/s), io=27.0MiB (28.3MB), run=1001-1001msec 00:13:05.083 00:13:05.083 Disk stats (read/write): 00:13:05.083 nvme0n1: ios=1074/1207, merge=0/0, ticks=502/373, in_queue=875, util=92.48% 00:13:05.083 nvme0n2: ios=1893/2048, merge=0/0, ticks=443/379, in_queue=822, util=88.55% 00:13:05.083 nvme0n3: ios=1266/1536, merge=0/0, ticks=484/406, in_queue=890, util=93.17% 00:13:05.083 nvme0n4: ios=1024/1275, merge=0/0, ticks=392/397, in_queue=789, util=89.68% 00:13:05.083 16:24:38 -- target/fio.sh@51 -- # 
/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:13:05.083 [global] 00:13:05.083 thread=1 00:13:05.083 invalidate=1 00:13:05.083 rw=randwrite 00:13:05.083 time_based=1 00:13:05.083 runtime=1 00:13:05.083 ioengine=libaio 00:13:05.083 direct=1 00:13:05.083 bs=4096 00:13:05.083 iodepth=1 00:13:05.083 norandommap=0 00:13:05.083 numjobs=1 00:13:05.083 00:13:05.083 verify_dump=1 00:13:05.083 verify_backlog=512 00:13:05.083 verify_state_save=0 00:13:05.083 do_verify=1 00:13:05.083 verify=crc32c-intel 00:13:05.083 [job0] 00:13:05.083 filename=/dev/nvme0n1 00:13:05.083 [job1] 00:13:05.083 filename=/dev/nvme0n2 00:13:05.083 [job2] 00:13:05.083 filename=/dev/nvme0n3 00:13:05.083 [job3] 00:13:05.083 filename=/dev/nvme0n4 00:13:05.083 Could not set queue depth (nvme0n1) 00:13:05.083 Could not set queue depth (nvme0n2) 00:13:05.083 Could not set queue depth (nvme0n3) 00:13:05.083 Could not set queue depth (nvme0n4) 00:13:05.083 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:05.083 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:05.083 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:05.083 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:05.083 fio-3.35 00:13:05.083 Starting 4 threads 00:13:06.460 00:13:06.460 job0: (groupid=0, jobs=1): err= 0: pid=75863: Wed Apr 17 16:24:40 2024 00:13:06.460 read: IOPS=1609, BW=6438KiB/s (6592kB/s)(6444KiB/1001msec) 00:13:06.460 slat (nsec): min=14805, max=88786, avg=22328.45, stdev=6291.51 00:13:06.460 clat (usec): min=225, max=864, avg=277.34, stdev=33.47 00:13:06.460 lat (usec): min=248, max=912, avg=299.67, stdev=34.36 00:13:06.460 clat percentiles (usec): 00:13:06.460 | 1.00th=[ 235], 5.00th=[ 245], 10.00th=[ 249], 20.00th=[ 258], 00:13:06.460 | 30.00th=[ 262], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 277], 00:13:06.460 | 70.00th=[ 281], 80.00th=[ 293], 90.00th=[ 314], 95.00th=[ 330], 00:13:06.460 | 99.00th=[ 367], 99.50th=[ 383], 99.90th=[ 652], 99.95th=[ 865], 00:13:06.460 | 99.99th=[ 865] 00:13:06.460 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:13:06.460 slat (usec): min=21, max=131, avg=31.99, stdev= 9.80 00:13:06.460 clat (usec): min=159, max=473, avg=216.77, stdev=26.05 00:13:06.460 lat (usec): min=187, max=566, avg=248.75, stdev=28.65 00:13:06.460 clat percentiles (usec): 00:13:06.460 | 1.00th=[ 176], 5.00th=[ 184], 10.00th=[ 190], 20.00th=[ 196], 00:13:06.460 | 30.00th=[ 202], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 219], 00:13:06.460 | 70.00th=[ 225], 80.00th=[ 235], 90.00th=[ 251], 95.00th=[ 265], 00:13:06.460 | 99.00th=[ 297], 99.50th=[ 310], 99.90th=[ 367], 99.95th=[ 392], 00:13:06.460 | 99.99th=[ 474] 00:13:06.460 bw ( KiB/s): min= 8192, max= 8192, per=29.37%, avg=8192.00, stdev= 0.00, samples=1 00:13:06.460 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:06.460 lat (usec) : 250=54.77%, 500=45.12%, 750=0.08%, 1000=0.03% 00:13:06.460 cpu : usr=1.70%, sys=7.20%, ctx=3659, majf=0, minf=17 00:13:06.460 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:06.460 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:06.460 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:06.460 issued rwts: total=1611,2048,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:13:06.460 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:06.460 job1: (groupid=0, jobs=1): err= 0: pid=75864: Wed Apr 17 16:24:40 2024 00:13:06.460 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:13:06.460 slat (nsec): min=11360, max=65976, avg=19332.30, stdev=7144.28 00:13:06.460 clat (usec): min=182, max=40582, avg=470.38, stdev=1256.81 00:13:06.460 lat (usec): min=195, max=40593, avg=489.71, stdev=1256.71 00:13:06.460 clat percentiles (usec): 00:13:06.460 | 1.00th=[ 310], 5.00th=[ 334], 10.00th=[ 351], 20.00th=[ 371], 00:13:06.460 | 30.00th=[ 392], 40.00th=[ 404], 50.00th=[ 424], 60.00th=[ 441], 00:13:06.460 | 70.00th=[ 461], 80.00th=[ 486], 90.00th=[ 529], 95.00th=[ 562], 00:13:06.460 | 99.00th=[ 660], 99.50th=[ 676], 99.90th=[ 725], 99.95th=[40633], 00:13:06.460 | 99.99th=[40633] 00:13:06.460 write: IOPS=1439, BW=5758KiB/s (5896kB/s)(5764KiB/1001msec); 0 zone resets 00:13:06.460 slat (usec): min=10, max=139, avg=26.54, stdev= 8.61 00:13:06.460 clat (usec): min=124, max=617, avg=315.57, stdev=66.97 00:13:06.460 lat (usec): min=147, max=639, avg=342.11, stdev=66.63 00:13:06.460 clat percentiles (usec): 00:13:06.460 | 1.00th=[ 151], 5.00th=[ 219], 10.00th=[ 247], 20.00th=[ 265], 00:13:06.460 | 30.00th=[ 281], 40.00th=[ 297], 50.00th=[ 310], 60.00th=[ 322], 00:13:06.460 | 70.00th=[ 338], 80.00th=[ 363], 90.00th=[ 404], 95.00th=[ 437], 00:13:06.460 | 99.00th=[ 494], 99.50th=[ 537], 99.90th=[ 594], 99.95th=[ 619], 00:13:06.460 | 99.99th=[ 619] 00:13:06.460 bw ( KiB/s): min= 4648, max= 4648, per=16.66%, avg=4648.00, stdev= 0.00, samples=1 00:13:06.460 iops : min= 1162, max= 1162, avg=1162.00, stdev= 0.00, samples=1 00:13:06.460 lat (usec) : 250=6.65%, 500=86.53%, 750=6.77% 00:13:06.460 lat (msec) : 50=0.04% 00:13:06.460 cpu : usr=1.20%, sys=4.50%, ctx=2467, majf=0, minf=8 00:13:06.460 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:06.460 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:06.460 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:06.460 issued rwts: total=1024,1441,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:06.460 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:06.460 job2: (groupid=0, jobs=1): err= 0: pid=75865: Wed Apr 17 16:24:40 2024 00:13:06.460 read: IOPS=1895, BW=7580KiB/s (7762kB/s)(7588KiB/1001msec) 00:13:06.460 slat (nsec): min=12652, max=84642, avg=22891.36, stdev=8837.10 00:13:06.460 clat (usec): min=166, max=3501, avg=254.93, stdev=109.73 00:13:06.460 lat (usec): min=180, max=3532, avg=277.82, stdev=112.40 00:13:06.460 clat percentiles (usec): 00:13:06.460 | 1.00th=[ 176], 5.00th=[ 190], 10.00th=[ 196], 20.00th=[ 210], 00:13:06.460 | 30.00th=[ 225], 40.00th=[ 235], 50.00th=[ 247], 60.00th=[ 260], 00:13:06.460 | 70.00th=[ 273], 80.00th=[ 289], 90.00th=[ 310], 95.00th=[ 330], 00:13:06.460 | 99.00th=[ 371], 99.50th=[ 498], 99.90th=[ 2769], 99.95th=[ 3490], 00:13:06.460 | 99.99th=[ 3490] 00:13:06.460 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:13:06.460 slat (usec): min=18, max=125, avg=29.16, stdev= 9.46 00:13:06.460 clat (usec): min=123, max=515, avg=197.19, stdev=37.56 00:13:06.460 lat (usec): min=143, max=563, avg=226.35, stdev=40.50 00:13:06.460 clat percentiles (usec): 00:13:06.460 | 1.00th=[ 135], 5.00th=[ 147], 10.00th=[ 153], 20.00th=[ 165], 00:13:06.460 | 30.00th=[ 176], 40.00th=[ 186], 50.00th=[ 194], 60.00th=[ 202], 00:13:06.460 | 70.00th=[ 212], 80.00th=[ 
225], 90.00th=[ 243], 95.00th=[ 258], 00:13:06.460 | 99.00th=[ 314], 99.50th=[ 343], 99.90th=[ 424], 99.95th=[ 424], 00:13:06.460 | 99.99th=[ 515] 00:13:06.460 bw ( KiB/s): min= 8192, max= 8192, per=29.37%, avg=8192.00, stdev= 0.00, samples=1 00:13:06.460 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:06.460 lat (usec) : 250=73.54%, 500=26.21%, 750=0.15%, 1000=0.03% 00:13:06.460 lat (msec) : 2=0.03%, 4=0.05% 00:13:06.460 cpu : usr=1.80%, sys=7.90%, ctx=3950, majf=0, minf=3 00:13:06.460 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:06.460 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:06.460 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:06.460 issued rwts: total=1897,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:06.460 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:06.460 job3: (groupid=0, jobs=1): err= 0: pid=75866: Wed Apr 17 16:24:40 2024 00:13:06.460 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:13:06.460 slat (nsec): min=11666, max=86155, avg=20106.64, stdev=6673.30 00:13:06.460 clat (usec): min=235, max=40501, avg=468.95, stdev=1254.41 00:13:06.460 lat (usec): min=250, max=40521, avg=489.05, stdev=1254.54 00:13:06.460 clat percentiles (usec): 00:13:06.460 | 1.00th=[ 310], 5.00th=[ 330], 10.00th=[ 347], 20.00th=[ 367], 00:13:06.460 | 30.00th=[ 383], 40.00th=[ 404], 50.00th=[ 424], 60.00th=[ 441], 00:13:06.460 | 70.00th=[ 461], 80.00th=[ 486], 90.00th=[ 529], 95.00th=[ 562], 00:13:06.460 | 99.00th=[ 660], 99.50th=[ 701], 99.90th=[ 816], 99.95th=[40633], 00:13:06.461 | 99.99th=[40633] 00:13:06.461 write: IOPS=1441, BW=5766KiB/s (5905kB/s)(5772KiB/1001msec); 0 zone resets 00:13:06.461 slat (usec): min=10, max=337, avg=26.64, stdev=11.39 00:13:06.461 clat (usec): min=3, max=636, avg=315.43, stdev=63.67 00:13:06.461 lat (usec): min=153, max=660, avg=342.07, stdev=62.45 00:13:06.461 clat percentiles (usec): 00:13:06.461 | 1.00th=[ 163], 5.00th=[ 227], 10.00th=[ 249], 20.00th=[ 269], 00:13:06.461 | 30.00th=[ 285], 40.00th=[ 297], 50.00th=[ 310], 60.00th=[ 326], 00:13:06.461 | 70.00th=[ 338], 80.00th=[ 359], 90.00th=[ 396], 95.00th=[ 433], 00:13:06.461 | 99.00th=[ 506], 99.50th=[ 529], 99.90th=[ 603], 99.95th=[ 635], 00:13:06.461 | 99.99th=[ 635] 00:13:06.461 bw ( KiB/s): min= 4648, max= 4648, per=16.66%, avg=4648.00, stdev= 0.00, samples=1 00:13:06.461 iops : min= 1162, max= 1162, avg=1162.00, stdev= 0.00, samples=1 00:13:06.461 lat (usec) : 4=0.04%, 250=6.53%, 500=86.46%, 750=6.89%, 1000=0.04% 00:13:06.461 lat (msec) : 50=0.04% 00:13:06.461 cpu : usr=1.10%, sys=4.60%, ctx=2470, majf=0, minf=17 00:13:06.461 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:06.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:06.461 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:06.461 issued rwts: total=1024,1443,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:06.461 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:06.461 00:13:06.461 Run status group 0 (all jobs): 00:13:06.461 READ: bw=21.7MiB/s (22.7MB/s), 4092KiB/s-7580KiB/s (4190kB/s-7762kB/s), io=21.7MiB (22.8MB), run=1001-1001msec 00:13:06.461 WRITE: bw=27.2MiB/s (28.6MB/s), 5758KiB/s-8184KiB/s (5896kB/s-8380kB/s), io=27.3MiB (28.6MB), run=1001-1001msec 00:13:06.461 00:13:06.461 Disk stats (read/write): 00:13:06.461 nvme0n1: ios=1586/1629, merge=0/0, ticks=488/375, in_queue=863, util=90.27% 
00:13:06.461 nvme0n2: ios=1073/1048, merge=0/0, ticks=532/345, in_queue=877, util=91.41% 00:13:06.461 nvme0n3: ios=1577/1856, merge=0/0, ticks=469/391, in_queue=860, util=91.18% 00:13:06.461 nvme0n4: ios=1024/1049, merge=0/0, ticks=482/346, in_queue=828, util=89.90% 00:13:06.461 16:24:40 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:13:06.461 [global] 00:13:06.461 thread=1 00:13:06.461 invalidate=1 00:13:06.461 rw=write 00:13:06.461 time_based=1 00:13:06.461 runtime=1 00:13:06.461 ioengine=libaio 00:13:06.461 direct=1 00:13:06.461 bs=4096 00:13:06.461 iodepth=128 00:13:06.461 norandommap=0 00:13:06.461 numjobs=1 00:13:06.461 00:13:06.461 verify_dump=1 00:13:06.461 verify_backlog=512 00:13:06.461 verify_state_save=0 00:13:06.461 do_verify=1 00:13:06.461 verify=crc32c-intel 00:13:06.461 [job0] 00:13:06.461 filename=/dev/nvme0n1 00:13:06.461 [job1] 00:13:06.461 filename=/dev/nvme0n2 00:13:06.461 [job2] 00:13:06.461 filename=/dev/nvme0n3 00:13:06.461 [job3] 00:13:06.461 filename=/dev/nvme0n4 00:13:06.461 Could not set queue depth (nvme0n1) 00:13:06.461 Could not set queue depth (nvme0n2) 00:13:06.461 Could not set queue depth (nvme0n3) 00:13:06.461 Could not set queue depth (nvme0n4) 00:13:06.461 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:06.461 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:06.461 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:06.461 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:06.461 fio-3.35 00:13:06.461 Starting 4 threads 00:13:07.838 00:13:07.838 job0: (groupid=0, jobs=1): err= 0: pid=75920: Wed Apr 17 16:24:41 2024 00:13:07.838 read: IOPS=2455, BW=9821KiB/s (10.1MB/s)(9860KiB/1004msec) 00:13:07.838 slat (usec): min=8, max=8301, avg=199.48, stdev=1004.00 00:13:07.838 clat (usec): min=432, max=30986, avg=25493.96, stdev=3245.30 00:13:07.838 lat (usec): min=5297, max=31003, avg=25693.44, stdev=3095.37 00:13:07.838 clat percentiles (usec): 00:13:07.838 | 1.00th=[ 5669], 5.00th=[20055], 10.00th=[25035], 20.00th=[25560], 00:13:07.838 | 30.00th=[25560], 40.00th=[25822], 50.00th=[26084], 60.00th=[26346], 00:13:07.838 | 70.00th=[26346], 80.00th=[26608], 90.00th=[27132], 95.00th=[28181], 00:13:07.838 | 99.00th=[29492], 99.50th=[29754], 99.90th=[31065], 99.95th=[31065], 00:13:07.838 | 99.99th=[31065] 00:13:07.838 write: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec); 0 zone resets 00:13:07.838 slat (usec): min=19, max=6955, avg=189.80, stdev=900.19 00:13:07.838 clat (usec): min=18388, max=27240, avg=24748.67, stdev=1107.77 00:13:07.838 lat (usec): min=19485, max=27288, avg=24938.47, stdev=647.60 00:13:07.838 clat percentiles (usec): 00:13:07.838 | 1.00th=[19268], 5.00th=[23987], 10.00th=[23987], 20.00th=[24249], 00:13:07.838 | 30.00th=[24511], 40.00th=[24773], 50.00th=[24773], 60.00th=[25035], 00:13:07.838 | 70.00th=[25035], 80.00th=[25297], 90.00th=[25822], 95.00th=[26084], 00:13:07.838 | 99.00th=[26870], 99.50th=[26870], 99.90th=[27132], 99.95th=[27132], 00:13:07.838 | 99.99th=[27132] 00:13:07.838 bw ( KiB/s): min= 9464, max=11016, per=16.85%, avg=10240.00, stdev=1097.43, samples=2 00:13:07.838 iops : min= 2366, max= 2754, avg=2560.00, stdev=274.36, samples=2 00:13:07.838 lat (usec) : 500=0.02% 00:13:07.838 lat (msec) : 10=0.74%, 20=2.75%, 
50=96.50% 00:13:07.838 cpu : usr=3.39%, sys=7.68%, ctx=162, majf=0, minf=11 00:13:07.838 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:13:07.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:07.838 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:07.838 issued rwts: total=2465,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:07.838 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:07.838 job1: (groupid=0, jobs=1): err= 0: pid=75921: Wed Apr 17 16:24:41 2024 00:13:07.838 read: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec) 00:13:07.838 slat (usec): min=6, max=3857, avg=92.03, stdev=460.03 00:13:07.838 clat (usec): min=8407, max=15875, avg=12061.90, stdev=1104.74 00:13:07.838 lat (usec): min=8424, max=17055, avg=12153.93, stdev=1141.35 00:13:07.838 clat percentiles (usec): 00:13:07.838 | 1.00th=[ 9241], 5.00th=[10028], 10.00th=[10421], 20.00th=[11469], 00:13:07.838 | 30.00th=[11731], 40.00th=[11863], 50.00th=[11994], 60.00th=[12256], 00:13:07.838 | 70.00th=[12518], 80.00th=[12911], 90.00th=[13435], 95.00th=[13829], 00:13:07.838 | 99.00th=[15008], 99.50th=[15139], 99.90th=[15664], 99.95th=[15795], 00:13:07.838 | 99.99th=[15926] 00:13:07.838 write: IOPS=5533, BW=21.6MiB/s (22.7MB/s)(21.6MiB/1001msec); 0 zone resets 00:13:07.838 slat (usec): min=9, max=3504, avg=87.67, stdev=360.53 00:13:07.838 clat (usec): min=331, max=15872, avg=11688.81, stdev=1468.21 00:13:07.838 lat (usec): min=3263, max=15904, avg=11776.48, stdev=1448.44 00:13:07.838 clat percentiles (usec): 00:13:07.838 | 1.00th=[ 7570], 5.00th=[ 8979], 10.00th=[ 9372], 20.00th=[11076], 00:13:07.838 | 30.00th=[11731], 40.00th=[11863], 50.00th=[11994], 60.00th=[12125], 00:13:07.838 | 70.00th=[12387], 80.00th=[12518], 90.00th=[12911], 95.00th=[13435], 00:13:07.838 | 99.00th=[14615], 99.50th=[15008], 99.90th=[15401], 99.95th=[15533], 00:13:07.838 | 99.99th=[15926] 00:13:07.838 bw ( KiB/s): min=20816, max=20816, per=34.26%, avg=20816.00, stdev= 0.00, samples=1 00:13:07.838 iops : min= 5204, max= 5204, avg=5204.00, stdev= 0.00, samples=1 00:13:07.838 lat (usec) : 500=0.01% 00:13:07.838 lat (msec) : 4=0.27%, 10=10.05%, 20=89.67% 00:13:07.838 cpu : usr=4.10%, sys=15.40%, ctx=619, majf=0, minf=8 00:13:07.838 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:13:07.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:07.838 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:07.838 issued rwts: total=5120,5539,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:07.838 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:07.838 job2: (groupid=0, jobs=1): err= 0: pid=75922: Wed Apr 17 16:24:41 2024 00:13:07.838 read: IOPS=4524, BW=17.7MiB/s (18.5MB/s)(17.7MiB/1002msec) 00:13:07.838 slat (usec): min=9, max=3417, avg=108.46, stdev=498.50 00:13:07.838 clat (usec): min=299, max=16996, avg=13881.56, stdev=1382.90 00:13:07.838 lat (usec): min=3137, max=18196, avg=13990.02, stdev=1310.03 00:13:07.838 clat percentiles (usec): 00:13:07.838 | 1.00th=[ 6980], 5.00th=[11469], 10.00th=[12780], 20.00th=[13829], 00:13:07.838 | 30.00th=[13960], 40.00th=[14091], 50.00th=[14091], 60.00th=[14222], 00:13:07.838 | 70.00th=[14353], 80.00th=[14484], 90.00th=[14746], 95.00th=[14877], 00:13:07.838 | 99.00th=[16188], 99.50th=[16450], 99.90th=[16909], 99.95th=[16909], 00:13:07.838 | 99.99th=[16909] 00:13:07.838 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 
00:13:07.838 slat (usec): min=12, max=3241, avg=101.33, stdev=363.22 00:13:07.838 clat (usec): min=10716, max=17170, avg=13784.54, stdev=1210.38 00:13:07.838 lat (usec): min=10744, max=17190, avg=13885.87, stdev=1195.75 00:13:07.838 clat percentiles (usec): 00:13:07.838 | 1.00th=[11338], 5.00th=[11731], 10.00th=[11863], 20.00th=[12518], 00:13:07.838 | 30.00th=[13304], 40.00th=[13698], 50.00th=[13960], 60.00th=[14222], 00:13:07.838 | 70.00th=[14484], 80.00th=[14746], 90.00th=[15270], 95.00th=[15664], 00:13:07.838 | 99.00th=[16319], 99.50th=[16581], 99.90th=[17171], 99.95th=[17171], 00:13:07.838 | 99.99th=[17171] 00:13:07.838 bw ( KiB/s): min=17792, max=19072, per=30.33%, avg=18432.00, stdev=905.10, samples=2 00:13:07.838 iops : min= 4448, max= 4768, avg=4608.00, stdev=226.27, samples=2 00:13:07.838 lat (usec) : 500=0.01% 00:13:07.838 lat (msec) : 4=0.35%, 10=0.35%, 20=99.29% 00:13:07.838 cpu : usr=6.49%, sys=11.99%, ctx=616, majf=0, minf=9 00:13:07.839 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:07.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:07.839 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:07.839 issued rwts: total=4534,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:07.839 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:07.839 job3: (groupid=0, jobs=1): err= 0: pid=75923: Wed Apr 17 16:24:41 2024 00:13:07.839 read: IOPS=2484, BW=9938KiB/s (10.2MB/s)(9988KiB/1005msec) 00:13:07.839 slat (usec): min=6, max=6539, avg=202.03, stdev=883.32 00:13:07.839 clat (usec): min=549, max=34503, avg=25061.82, stdev=3383.43 00:13:07.839 lat (usec): min=5455, max=34521, avg=25263.85, stdev=3290.06 00:13:07.839 clat percentiles (usec): 00:13:07.839 | 1.00th=[ 6063], 5.00th=[20317], 10.00th=[22152], 20.00th=[23200], 00:13:07.839 | 30.00th=[25297], 40.00th=[25560], 50.00th=[25822], 60.00th=[26084], 00:13:07.839 | 70.00th=[26346], 80.00th=[26608], 90.00th=[27132], 95.00th=[28443], 00:13:07.839 | 99.00th=[30540], 99.50th=[34341], 99.90th=[34341], 99.95th=[34341], 00:13:07.839 | 99.99th=[34341] 00:13:07.839 write: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec); 0 zone resets 00:13:07.839 slat (usec): min=12, max=6257, avg=185.00, stdev=866.99 00:13:07.839 clat (usec): min=16235, max=33681, avg=25003.23, stdev=2132.64 00:13:07.839 lat (usec): min=17216, max=33710, avg=25188.22, stdev=1968.34 00:13:07.839 clat percentiles (usec): 00:13:07.839 | 1.00th=[19006], 5.00th=[21627], 10.00th=[22414], 20.00th=[24249], 00:13:07.839 | 30.00th=[24511], 40.00th=[24773], 50.00th=[24773], 60.00th=[25035], 00:13:07.839 | 70.00th=[25297], 80.00th=[25822], 90.00th=[28181], 95.00th=[28443], 00:13:07.839 | 99.00th=[32637], 99.50th=[33817], 99.90th=[33817], 99.95th=[33817], 00:13:07.839 | 99.99th=[33817] 00:13:07.839 bw ( KiB/s): min= 9264, max=11216, per=16.85%, avg=10240.00, stdev=1380.27, samples=2 00:13:07.839 iops : min= 2316, max= 2804, avg=2560.00, stdev=345.07, samples=2 00:13:07.839 lat (usec) : 750=0.02% 00:13:07.839 lat (msec) : 10=0.63%, 20=3.08%, 50=96.26% 00:13:07.839 cpu : usr=2.89%, sys=8.57%, ctx=231, majf=0, minf=7 00:13:07.839 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:13:07.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:07.839 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:07.839 issued rwts: total=2497,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:07.839 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:13:07.839 00:13:07.839 Run status group 0 (all jobs): 00:13:07.839 READ: bw=56.8MiB/s (59.6MB/s), 9821KiB/s-20.0MiB/s (10.1MB/s-20.9MB/s), io=57.1MiB (59.9MB), run=1001-1005msec 00:13:07.839 WRITE: bw=59.3MiB/s (62.2MB/s), 9.95MiB/s-21.6MiB/s (10.4MB/s-22.7MB/s), io=59.6MiB (62.5MB), run=1001-1005msec 00:13:07.839 00:13:07.839 Disk stats (read/write): 00:13:07.839 nvme0n1: ios=2098/2304, merge=0/0, ticks=12589/12792, in_queue=25381, util=89.98% 00:13:07.839 nvme0n2: ios=4654/4623, merge=0/0, ticks=16912/15614, in_queue=32526, util=89.81% 00:13:07.839 nvme0n3: ios=3838/4096, merge=0/0, ticks=12569/12604, in_queue=25173, util=90.46% 00:13:07.839 nvme0n4: ios=2048/2350, merge=0/0, ticks=12986/12637, in_queue=25623, util=89.90% 00:13:07.839 16:24:41 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:13:07.839 [global] 00:13:07.839 thread=1 00:13:07.839 invalidate=1 00:13:07.839 rw=randwrite 00:13:07.839 time_based=1 00:13:07.839 runtime=1 00:13:07.839 ioengine=libaio 00:13:07.839 direct=1 00:13:07.839 bs=4096 00:13:07.839 iodepth=128 00:13:07.839 norandommap=0 00:13:07.839 numjobs=1 00:13:07.839 00:13:07.839 verify_dump=1 00:13:07.839 verify_backlog=512 00:13:07.839 verify_state_save=0 00:13:07.839 do_verify=1 00:13:07.839 verify=crc32c-intel 00:13:07.839 [job0] 00:13:07.839 filename=/dev/nvme0n1 00:13:07.839 [job1] 00:13:07.839 filename=/dev/nvme0n2 00:13:07.839 [job2] 00:13:07.839 filename=/dev/nvme0n3 00:13:07.839 [job3] 00:13:07.839 filename=/dev/nvme0n4 00:13:07.839 Could not set queue depth (nvme0n1) 00:13:07.839 Could not set queue depth (nvme0n2) 00:13:07.839 Could not set queue depth (nvme0n3) 00:13:07.839 Could not set queue depth (nvme0n4) 00:13:07.839 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:07.839 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:07.839 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:07.839 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:07.839 fio-3.35 00:13:07.839 Starting 4 threads 00:13:09.216 00:13:09.216 job0: (groupid=0, jobs=1): err= 0: pid=75982: Wed Apr 17 16:24:42 2024 00:13:09.216 read: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec) 00:13:09.216 slat (usec): min=8, max=6182, avg=95.67, stdev=499.97 00:13:09.216 clat (usec): min=7387, max=19611, avg=12082.21, stdev=1712.44 00:13:09.216 lat (usec): min=7411, max=19749, avg=12177.87, stdev=1764.46 00:13:09.216 clat percentiles (usec): 00:13:09.216 | 1.00th=[ 8094], 5.00th=[ 9372], 10.00th=[10421], 20.00th=[11076], 00:13:09.216 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11731], 60.00th=[11863], 00:13:09.216 | 70.00th=[12780], 80.00th=[13698], 90.00th=[14091], 95.00th=[15008], 00:13:09.216 | 99.00th=[17171], 99.50th=[17957], 99.90th=[19268], 99.95th=[19530], 00:13:09.216 | 99.99th=[19530] 00:13:09.216 write: IOPS=5477, BW=21.4MiB/s (22.4MB/s)(21.4MiB/1001msec); 0 zone resets 00:13:09.216 slat (usec): min=10, max=5484, avg=85.22, stdev=360.00 00:13:09.216 clat (usec): min=484, max=19584, avg=11797.25, stdev=1773.35 00:13:09.216 lat (usec): min=4767, max=19628, avg=11882.48, stdev=1803.12 00:13:09.216 clat percentiles (usec): 00:13:09.216 | 1.00th=[ 5866], 5.00th=[ 9110], 10.00th=[10159], 20.00th=[10814], 00:13:09.216 | 
30.00th=[11076], 40.00th=[11338], 50.00th=[11731], 60.00th=[11863], 00:13:09.216 | 70.00th=[12125], 80.00th=[13173], 90.00th=[13960], 95.00th=[14484], 00:13:09.216 | 99.00th=[16909], 99.50th=[17957], 99.90th=[19530], 99.95th=[19530], 00:13:09.216 | 99.99th=[19530] 00:13:09.216 bw ( KiB/s): min=20521, max=22368, per=33.79%, avg=21444.50, stdev=1306.03, samples=2 00:13:09.216 iops : min= 5130, max= 5592, avg=5361.00, stdev=326.68, samples=2 00:13:09.216 lat (usec) : 500=0.01% 00:13:09.217 lat (msec) : 10=8.14%, 20=91.85% 00:13:09.217 cpu : usr=4.00%, sys=15.40%, ctx=642, majf=0, minf=8 00:13:09.217 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:13:09.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:09.217 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:09.217 issued rwts: total=5120,5483,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:09.217 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:09.217 job1: (groupid=0, jobs=1): err= 0: pid=75983: Wed Apr 17 16:24:42 2024 00:13:09.217 read: IOPS=5159, BW=20.2MiB/s (21.1MB/s)(20.3MiB/1009msec) 00:13:09.217 slat (usec): min=5, max=11923, avg=96.43, stdev=635.47 00:13:09.217 clat (usec): min=4745, max=22700, avg=12587.22, stdev=2852.77 00:13:09.217 lat (usec): min=4767, max=22715, avg=12683.64, stdev=2886.08 00:13:09.217 clat percentiles (usec): 00:13:09.217 | 1.00th=[ 7308], 5.00th=[ 9241], 10.00th=[ 9896], 20.00th=[10552], 00:13:09.217 | 30.00th=[11207], 40.00th=[11600], 50.00th=[11863], 60.00th=[12125], 00:13:09.217 | 70.00th=[13304], 80.00th=[14353], 90.00th=[16319], 95.00th=[19006], 00:13:09.217 | 99.00th=[21627], 99.50th=[22152], 99.90th=[22676], 99.95th=[22676], 00:13:09.217 | 99.99th=[22676] 00:13:09.217 write: IOPS=5581, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1009msec); 0 zone resets 00:13:09.217 slat (usec): min=5, max=8978, avg=80.82, stdev=445.80 00:13:09.217 clat (usec): min=3768, max=23522, avg=11071.08, stdev=2251.06 00:13:09.217 lat (usec): min=3795, max=23558, avg=11151.90, stdev=2298.05 00:13:09.217 clat percentiles (usec): 00:13:09.217 | 1.00th=[ 5014], 5.00th=[ 5997], 10.00th=[ 7308], 20.00th=[10028], 00:13:09.217 | 30.00th=[10814], 40.00th=[11338], 50.00th=[11600], 60.00th=[12125], 00:13:09.217 | 70.00th=[12387], 80.00th=[12518], 90.00th=[12649], 95.00th=[12911], 00:13:09.217 | 99.00th=[17695], 99.50th=[19792], 99.90th=[22414], 99.95th=[22676], 00:13:09.217 | 99.99th=[23462] 00:13:09.217 bw ( KiB/s): min=22016, max=22749, per=35.26%, avg=22382.50, stdev=518.31, samples=2 00:13:09.217 iops : min= 5504, max= 5687, avg=5595.50, stdev=129.40, samples=2 00:13:09.217 lat (msec) : 4=0.04%, 10=16.75%, 20=81.44%, 50=1.77% 00:13:09.217 cpu : usr=6.65%, sys=11.81%, ctx=699, majf=0, minf=7 00:13:09.217 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:13:09.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:09.217 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:09.217 issued rwts: total=5206,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:09.217 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:09.217 job2: (groupid=0, jobs=1): err= 0: pid=75984: Wed Apr 17 16:24:42 2024 00:13:09.217 read: IOPS=2522, BW=9.85MiB/s (10.3MB/s)(10.0MiB/1015msec) 00:13:09.217 slat (usec): min=3, max=15416, avg=161.43, stdev=980.09 00:13:09.217 clat (usec): min=7113, max=46680, avg=18827.29, stdev=7006.65 00:13:09.217 lat (usec): min=7127, max=46697, avg=18988.72, 
stdev=7067.85 00:13:09.217 clat percentiles (usec): 00:13:09.217 | 1.00th=[ 9503], 5.00th=[12780], 10.00th=[13566], 20.00th=[14353], 00:13:09.217 | 30.00th=[14877], 40.00th=[15664], 50.00th=[16319], 60.00th=[17433], 00:13:09.217 | 70.00th=[19006], 80.00th=[20317], 90.00th=[29754], 95.00th=[36439], 00:13:09.217 | 99.00th=[42206], 99.50th=[42730], 99.90th=[46400], 99.95th=[46924], 00:13:09.217 | 99.99th=[46924] 00:13:09.217 write: IOPS=2866, BW=11.2MiB/s (11.7MB/s)(11.4MiB/1015msec); 0 zone resets 00:13:09.217 slat (usec): min=5, max=12748, avg=193.35, stdev=818.88 00:13:09.217 clat (usec): min=5956, max=46803, avg=27632.64, stdev=10322.34 00:13:09.217 lat (usec): min=5979, max=46814, avg=27825.99, stdev=10395.78 00:13:09.217 clat percentiles (usec): 00:13:09.217 | 1.00th=[ 6783], 5.00th=[11338], 10.00th=[13960], 20.00th=[15664], 00:13:09.217 | 30.00th=[19006], 40.00th=[26870], 50.00th=[29492], 60.00th=[31589], 00:13:09.217 | 70.00th=[34341], 80.00th=[36963], 90.00th=[40109], 95.00th=[43254], 00:13:09.217 | 99.00th=[46400], 99.50th=[46924], 99.90th=[46924], 99.95th=[46924], 00:13:09.217 | 99.99th=[46924] 00:13:09.217 bw ( KiB/s): min= 9976, max=12312, per=17.56%, avg=11144.00, stdev=1651.80, samples=2 00:13:09.217 iops : min= 2494, max= 3078, avg=2786.00, stdev=412.95, samples=2 00:13:09.217 lat (msec) : 10=2.47%, 20=51.65%, 50=45.89% 00:13:09.217 cpu : usr=2.96%, sys=7.20%, ctx=432, majf=0, minf=9 00:13:09.217 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:13:09.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:09.217 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:09.217 issued rwts: total=2560,2910,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:09.217 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:09.217 job3: (groupid=0, jobs=1): err= 0: pid=75985: Wed Apr 17 16:24:42 2024 00:13:09.217 read: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec) 00:13:09.217 slat (usec): min=6, max=30431, avg=260.37, stdev=1709.55 00:13:09.217 clat (usec): min=12353, max=73063, avg=35383.93, stdev=17256.23 00:13:09.217 lat (usec): min=12369, max=73079, avg=35644.30, stdev=17270.47 00:13:09.217 clat percentiles (usec): 00:13:09.217 | 1.00th=[12911], 5.00th=[21103], 10.00th=[22938], 20.00th=[23462], 00:13:09.217 | 30.00th=[24249], 40.00th=[24773], 50.00th=[26084], 60.00th=[27395], 00:13:09.217 | 70.00th=[38011], 80.00th=[54789], 90.00th=[69731], 95.00th=[71828], 00:13:09.217 | 99.00th=[72877], 99.50th=[72877], 99.90th=[72877], 99.95th=[72877], 00:13:09.217 | 99.99th=[72877] 00:13:09.217 write: IOPS=2072, BW=8291KiB/s (8490kB/s)(8324KiB/1004msec); 0 zone resets 00:13:09.217 slat (usec): min=12, max=27654, avg=215.37, stdev=1305.43 00:13:09.217 clat (usec): min=1892, max=76549, avg=25250.54, stdev=13228.37 00:13:09.217 lat (usec): min=5811, max=76577, avg=25465.91, stdev=13288.19 00:13:09.217 clat percentiles (usec): 00:13:09.217 | 1.00th=[ 6456], 5.00th=[17433], 10.00th=[18220], 20.00th=[19006], 00:13:09.217 | 30.00th=[19530], 40.00th=[19792], 50.00th=[20317], 60.00th=[20579], 00:13:09.217 | 70.00th=[21890], 80.00th=[24249], 90.00th=[51119], 95.00th=[55837], 00:13:09.217 | 99.00th=[76022], 99.50th=[76022], 99.90th=[76022], 99.95th=[76022], 00:13:09.217 | 99.99th=[77071] 00:13:09.217 bw ( KiB/s): min= 8192, max= 8192, per=12.91%, avg=8192.00, stdev= 0.00, samples=2 00:13:09.217 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:13:09.217 lat (msec) : 2=0.02%, 10=0.78%, 20=23.66%, 50=58.17%, 
100=17.36% 00:13:09.217 cpu : usr=1.79%, sys=6.88%, ctx=135, majf=0, minf=13 00:13:09.217 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:13:09.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:09.217 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:09.217 issued rwts: total=2048,2081,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:09.217 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:09.217 00:13:09.217 Run status group 0 (all jobs): 00:13:09.217 READ: bw=57.5MiB/s (60.3MB/s), 8159KiB/s-20.2MiB/s (8355kB/s-21.1MB/s), io=58.3MiB (61.2MB), run=1001-1015msec 00:13:09.217 WRITE: bw=62.0MiB/s (65.0MB/s), 8291KiB/s-21.8MiB/s (8490kB/s-22.9MB/s), io=62.9MiB (66.0MB), run=1001-1015msec 00:13:09.217 00:13:09.217 Disk stats (read/write): 00:13:09.217 nvme0n1: ios=4614/4608, merge=0/0, ticks=25873/24279, in_queue=50152, util=90.38% 00:13:09.217 nvme0n2: ios=4657/4775, merge=0/0, ticks=53601/49426, in_queue=103027, util=90.31% 00:13:09.217 nvme0n3: ios=2085/2520, merge=0/0, ticks=36440/68703, in_queue=105143, util=90.89% 00:13:09.217 nvme0n4: ios=1664/2048, merge=0/0, ticks=13597/12265, in_queue=25862, util=89.92% 00:13:09.217 16:24:42 -- target/fio.sh@55 -- # sync 00:13:09.217 16:24:42 -- target/fio.sh@59 -- # fio_pid=75999 00:13:09.217 16:24:42 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:13:09.217 16:24:42 -- target/fio.sh@61 -- # sleep 3 00:13:09.217 [global] 00:13:09.217 thread=1 00:13:09.217 invalidate=1 00:13:09.217 rw=read 00:13:09.217 time_based=1 00:13:09.217 runtime=10 00:13:09.217 ioengine=libaio 00:13:09.217 direct=1 00:13:09.217 bs=4096 00:13:09.217 iodepth=1 00:13:09.217 norandommap=1 00:13:09.217 numjobs=1 00:13:09.217 00:13:09.217 [job0] 00:13:09.217 filename=/dev/nvme0n1 00:13:09.217 [job1] 00:13:09.217 filename=/dev/nvme0n2 00:13:09.217 [job2] 00:13:09.217 filename=/dev/nvme0n3 00:13:09.217 [job3] 00:13:09.217 filename=/dev/nvme0n4 00:13:09.217 Could not set queue depth (nvme0n1) 00:13:09.217 Could not set queue depth (nvme0n2) 00:13:09.217 Could not set queue depth (nvme0n3) 00:13:09.217 Could not set queue depth (nvme0n4) 00:13:09.217 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:09.217 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:09.217 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:09.217 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:09.217 fio-3.35 00:13:09.217 Starting 4 threads 00:13:12.501 16:24:45 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:13:12.501 fio: pid=76047, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:12.501 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=31121408, buflen=4096 00:13:12.501 16:24:46 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:13:12.501 fio: pid=76046, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:12.501 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=34676736, buflen=4096 00:13:12.501 16:24:46 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:12.501 16:24:46 -- target/fio.sh@66 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:13:12.760 fio: pid=76044, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:12.760 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=42491904, buflen=4096 00:13:13.019 16:24:46 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:13.019 16:24:46 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:13:13.278 fio: pid=76045, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:13.278 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=48824320, buflen=4096 00:13:13.278 00:13:13.278 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=76044: Wed Apr 17 16:24:47 2024 00:13:13.278 read: IOPS=2983, BW=11.7MiB/s (12.2MB/s)(40.5MiB/3478msec) 00:13:13.278 slat (usec): min=8, max=13834, avg=26.80, stdev=190.44 00:13:13.278 clat (usec): min=141, max=4146, avg=306.08, stdev=77.90 00:13:13.278 lat (usec): min=158, max=14056, avg=332.88, stdev=205.01 00:13:13.278 clat percentiles (usec): 00:13:13.278 | 1.00th=[ 180], 5.00th=[ 251], 10.00th=[ 262], 20.00th=[ 273], 00:13:13.278 | 30.00th=[ 281], 40.00th=[ 289], 50.00th=[ 293], 60.00th=[ 302], 00:13:13.278 | 70.00th=[ 314], 80.00th=[ 338], 90.00th=[ 371], 95.00th=[ 396], 00:13:13.278 | 99.00th=[ 457], 99.50th=[ 498], 99.90th=[ 963], 99.95th=[ 1647], 00:13:13.278 | 99.99th=[ 2835] 00:13:13.278 bw ( KiB/s): min=10072, max=12896, per=29.53%, avg=11922.67, stdev=1280.24, samples=6 00:13:13.278 iops : min= 2518, max= 3224, avg=2980.67, stdev=320.06, samples=6 00:13:13.278 lat (usec) : 250=5.02%, 500=94.48%, 750=0.33%, 1000=0.07% 00:13:13.278 lat (msec) : 2=0.07%, 4=0.02%, 10=0.01% 00:13:13.278 cpu : usr=1.47%, sys=5.55%, ctx=10390, majf=0, minf=1 00:13:13.278 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:13.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:13.278 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:13.278 issued rwts: total=10375,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:13.278 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:13.278 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=76045: Wed Apr 17 16:24:47 2024 00:13:13.278 read: IOPS=3136, BW=12.2MiB/s (12.8MB/s)(46.6MiB/3801msec) 00:13:13.278 slat (usec): min=9, max=15746, avg=21.72, stdev=223.67 00:13:13.278 clat (usec): min=134, max=4051, avg=295.59, stdev=97.58 00:13:13.278 lat (usec): min=147, max=15951, avg=317.30, stdev=244.18 00:13:13.278 clat percentiles (usec): 00:13:13.278 | 1.00th=[ 149], 5.00th=[ 163], 10.00th=[ 180], 20.00th=[ 269], 00:13:13.278 | 30.00th=[ 281], 40.00th=[ 289], 50.00th=[ 297], 60.00th=[ 302], 00:13:13.278 | 70.00th=[ 314], 80.00th=[ 338], 90.00th=[ 375], 95.00th=[ 400], 00:13:13.278 | 99.00th=[ 461], 99.50th=[ 523], 99.90th=[ 1020], 99.95th=[ 1876], 00:13:13.278 | 99.99th=[ 3687] 00:13:13.278 bw ( KiB/s): min= 9920, max=13222, per=29.95%, avg=12088.86, stdev=1314.19, samples=7 00:13:13.278 iops : min= 2480, max= 3305, avg=3022.14, stdev=328.47, samples=7 00:13:13.278 lat (usec) : 250=15.75%, 500=83.62%, 750=0.47%, 1000=0.05% 00:13:13.278 lat (msec) : 2=0.06%, 4=0.03%, 10=0.01% 00:13:13.278 cpu : usr=0.76%, sys=4.53%, ctx=11948, majf=0, minf=1 00:13:13.278 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:13:13.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:13.278 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:13.278 issued rwts: total=11921,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:13.278 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:13.278 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=76046: Wed Apr 17 16:24:47 2024 00:13:13.278 read: IOPS=2685, BW=10.5MiB/s (11.0MB/s)(33.1MiB/3153msec) 00:13:13.278 slat (usec): min=12, max=13479, avg=26.10, stdev=194.06 00:13:13.278 clat (usec): min=163, max=3524, avg=343.96, stdev=114.29 00:13:13.278 lat (usec): min=177, max=13790, avg=370.07, stdev=229.42 00:13:13.278 clat percentiles (usec): 00:13:13.278 | 1.00th=[ 198], 5.00th=[ 277], 10.00th=[ 285], 20.00th=[ 293], 00:13:13.278 | 30.00th=[ 297], 40.00th=[ 302], 50.00th=[ 306], 60.00th=[ 310], 00:13:13.278 | 70.00th=[ 318], 80.00th=[ 334], 90.00th=[ 519], 95.00th=[ 603], 00:13:13.278 | 99.00th=[ 693], 99.50th=[ 717], 99.90th=[ 848], 99.95th=[ 1418], 00:13:13.278 | 99.99th=[ 3523] 00:13:13.278 bw ( KiB/s): min= 6488, max=12608, per=26.45%, avg=10676.00, stdev=2810.06, samples=6 00:13:13.278 iops : min= 1622, max= 3152, avg=2669.00, stdev=702.52, samples=6 00:13:13.278 lat (usec) : 250=2.22%, 500=86.04%, 750=11.44%, 1000=0.21% 00:13:13.278 lat (msec) : 2=0.05%, 4=0.02% 00:13:13.278 cpu : usr=1.30%, sys=4.82%, ctx=8469, majf=0, minf=1 00:13:13.278 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:13.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:13.278 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:13.278 issued rwts: total=8467,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:13.278 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:13.278 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=76047: Wed Apr 17 16:24:47 2024 00:13:13.278 read: IOPS=2609, BW=10.2MiB/s (10.7MB/s)(29.7MiB/2912msec) 00:13:13.278 slat (usec): min=14, max=128, avg=25.24, stdev= 9.23 00:13:13.278 clat (usec): min=161, max=2405, avg=354.45, stdev=126.41 00:13:13.278 lat (usec): min=177, max=2425, avg=379.69, stdev=132.05 00:13:13.278 clat percentiles (usec): 00:13:13.278 | 1.00th=[ 265], 5.00th=[ 273], 10.00th=[ 281], 20.00th=[ 289], 00:13:13.278 | 30.00th=[ 293], 40.00th=[ 297], 50.00th=[ 302], 60.00th=[ 310], 00:13:13.278 | 70.00th=[ 318], 80.00th=[ 363], 90.00th=[ 553], 95.00th=[ 652], 00:13:13.278 | 99.00th=[ 775], 99.50th=[ 840], 99.90th=[ 996], 99.95th=[ 1057], 00:13:13.278 | 99.99th=[ 2409] 00:13:13.278 bw ( KiB/s): min= 6160, max=12408, per=27.36%, avg=11046.40, stdev=2733.67, samples=5 00:13:13.278 iops : min= 1540, max= 3102, avg=2761.60, stdev=683.42, samples=5 00:13:13.278 lat (usec) : 250=0.50%, 500=83.95%, 750=14.28%, 1000=1.17% 00:13:13.278 lat (msec) : 2=0.08%, 4=0.01% 00:13:13.278 cpu : usr=1.34%, sys=5.67%, ctx=7601, majf=0, minf=1 00:13:13.278 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:13.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:13.278 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:13.278 issued rwts: total=7599,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:13.278 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:13.279 00:13:13.279 Run status group 0 (all jobs): 00:13:13.279 READ: 
bw=39.4MiB/s (41.3MB/s), 10.2MiB/s-12.2MiB/s (10.7MB/s-12.8MB/s), io=150MiB (157MB), run=2912-3801msec 00:13:13.279 00:13:13.279 Disk stats (read/write): 00:13:13.279 nvme0n1: ios=10012/0, merge=0/0, ticks=3114/0, in_queue=3114, util=95.39% 00:13:13.279 nvme0n2: ios=10948/0, merge=0/0, ticks=3410/0, in_queue=3410, util=95.34% 00:13:13.279 nvme0n3: ios=8361/0, merge=0/0, ticks=2915/0, in_queue=2915, util=96.05% 00:13:13.279 nvme0n4: ios=7535/0, merge=0/0, ticks=2676/0, in_queue=2676, util=96.76% 00:13:13.279 16:24:47 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:13.279 16:24:47 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:13:13.538 16:24:47 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:13.538 16:24:47 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:13:13.798 16:24:47 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:13.798 16:24:47 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:13:14.372 16:24:48 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:14.373 16:24:48 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:13:14.373 16:24:48 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:14.373 16:24:48 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:13:14.940 16:24:48 -- target/fio.sh@69 -- # fio_status=0 00:13:14.940 16:24:48 -- target/fio.sh@70 -- # wait 75999 00:13:14.940 16:24:48 -- target/fio.sh@70 -- # fio_status=4 00:13:14.940 16:24:48 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:14.940 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.940 16:24:48 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:14.940 16:24:48 -- common/autotest_common.sh@1205 -- # local i=0 00:13:14.940 16:24:48 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:14.940 16:24:48 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:14.940 16:24:48 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:14.940 16:24:48 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:14.940 nvmf hotplug test: fio failed as expected 00:13:14.940 16:24:48 -- common/autotest_common.sh@1217 -- # return 0 00:13:14.940 16:24:48 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:13:14.940 16:24:48 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:13:14.940 16:24:48 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:15.199 16:24:49 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:13:15.199 16:24:49 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:13:15.199 16:24:49 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:13:15.199 16:24:49 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:13:15.199 16:24:49 -- target/fio.sh@91 -- # nvmftestfini 00:13:15.199 16:24:49 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:15.199 16:24:49 -- nvmf/common.sh@117 -- # sync 00:13:15.199 16:24:49 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:15.199 
16:24:49 -- nvmf/common.sh@120 -- # set +e 00:13:15.199 16:24:49 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:15.199 16:24:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:15.199 rmmod nvme_tcp 00:13:15.199 rmmod nvme_fabrics 00:13:15.199 rmmod nvme_keyring 00:13:15.199 16:24:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:15.199 16:24:49 -- nvmf/common.sh@124 -- # set -e 00:13:15.199 16:24:49 -- nvmf/common.sh@125 -- # return 0 00:13:15.199 16:24:49 -- nvmf/common.sh@478 -- # '[' -n 75498 ']' 00:13:15.199 16:24:49 -- nvmf/common.sh@479 -- # killprocess 75498 00:13:15.199 16:24:49 -- common/autotest_common.sh@936 -- # '[' -z 75498 ']' 00:13:15.199 16:24:49 -- common/autotest_common.sh@940 -- # kill -0 75498 00:13:15.199 16:24:49 -- common/autotest_common.sh@941 -- # uname 00:13:15.199 16:24:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:15.199 16:24:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75498 00:13:15.199 killing process with pid 75498 00:13:15.199 16:24:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:15.199 16:24:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:15.199 16:24:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75498' 00:13:15.199 16:24:49 -- common/autotest_common.sh@955 -- # kill 75498 00:13:15.199 16:24:49 -- common/autotest_common.sh@960 -- # wait 75498 00:13:15.458 16:24:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:15.458 16:24:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:15.458 16:24:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:15.458 16:24:49 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:15.458 16:24:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:15.458 16:24:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.458 16:24:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:15.458 16:24:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.458 16:24:49 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:15.458 00:13:15.458 real 0m20.581s 00:13:15.458 user 1m19.747s 00:13:15.458 sys 0m8.648s 00:13:15.458 16:24:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:15.458 16:24:49 -- common/autotest_common.sh@10 -- # set +x 00:13:15.458 ************************************ 00:13:15.458 END TEST nvmf_fio_target 00:13:15.458 ************************************ 00:13:15.458 16:24:49 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:15.458 16:24:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:15.458 16:24:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:15.458 16:24:49 -- common/autotest_common.sh@10 -- # set +x 00:13:15.719 ************************************ 00:13:15.719 START TEST nvmf_bdevio 00:13:15.719 ************************************ 00:13:15.719 16:24:49 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:15.719 * Looking for test storage... 
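The nvmf_veth_init sequence traced above (nvmf/common.sh@154 through @207, sourced from /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh) builds the two-namespace test topology one command at a time across many wrapped log lines. Condensed into a standalone sketch it amounts to the shell below. Every command is taken from the trace itself; the for-loop and the sh -c grouping are condensations for readability, and the authoritative version in test/nvmf/common.sh also performs the teardown of leftover interfaces seen at the top of the trace (the "Cannot find device" messages):

  # Target side runs in its own network namespace; the initiator stays in the root ns.
  ip netns add nvmf_tgt_ns_spdk
  # Three veth pairs: one initiator-facing, two target-facing (10.0.0.2 and 10.0.0.3).
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target address
  # Bring everything up (the traced script issues each "ip link set ... up" separately).
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
  # Bridge the root-ns ends of all three pairs so initiator and target namespaces can talk.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # Open the NVMe-oF TCP port toward the initiator interface and allow bridged forwarding.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # Sanity pings, exactly as in the log: both target IPs from the root ns,
  # then the initiator IP from inside the target namespace.
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

This is why the bdevio run that starts here, like the fio_target run before it, launches nvmf_tgt with "ip netns exec nvmf_tgt_ns_spdk" and addresses it at 10.0.0.2:4420 from the root namespace.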
00:13:15.719 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:15.719 16:24:49 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:15.719 16:24:49 -- nvmf/common.sh@7 -- # uname -s 00:13:15.719 16:24:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:15.719 16:24:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:15.719 16:24:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:15.719 16:24:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:15.719 16:24:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:15.719 16:24:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:15.719 16:24:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:15.719 16:24:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:15.719 16:24:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:15.719 16:24:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:15.719 16:24:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d 00:13:15.719 16:24:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=35bbb10f-fc38-42ac-b909-033700c5e05d 00:13:15.719 16:24:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:15.719 16:24:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:15.719 16:24:49 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:15.719 16:24:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:15.719 16:24:49 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:15.719 16:24:49 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:15.719 16:24:49 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:15.719 16:24:49 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:15.719 16:24:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.719 16:24:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.719 16:24:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.719 16:24:49 -- paths/export.sh@5 -- # export PATH 00:13:15.719 16:24:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.719 16:24:49 -- nvmf/common.sh@47 -- # : 0 00:13:15.719 16:24:49 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:15.719 16:24:49 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:15.719 16:24:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:15.719 16:24:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:15.719 16:24:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:15.719 16:24:49 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:15.719 16:24:49 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:15.719 16:24:49 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:15.719 16:24:49 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:15.719 16:24:49 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:15.719 16:24:49 -- target/bdevio.sh@14 -- # nvmftestinit 00:13:15.719 16:24:49 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:15.719 16:24:49 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:15.719 16:24:49 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:15.719 16:24:49 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:15.719 16:24:49 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:15.719 16:24:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.719 16:24:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:15.720 16:24:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.720 16:24:49 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:13:15.720 16:24:49 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:13:15.720 16:24:49 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:13:15.720 16:24:49 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:13:15.720 16:24:49 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:13:15.720 16:24:49 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:13:15.720 16:24:49 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:15.720 16:24:49 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:15.720 16:24:49 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:15.720 16:24:49 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:15.720 16:24:49 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:15.720 16:24:49 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:15.720 16:24:49 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:15.720 16:24:49 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:15.720 16:24:49 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:15.720 16:24:49 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:15.720 16:24:49 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:15.720 16:24:49 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:15.720 16:24:49 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:15.720 16:24:49 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:15.720 Cannot find device "nvmf_tgt_br" 00:13:15.720 16:24:49 -- nvmf/common.sh@155 -- # true 00:13:15.720 16:24:49 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:15.720 Cannot find device "nvmf_tgt_br2" 00:13:15.720 16:24:49 -- nvmf/common.sh@156 -- # true 00:13:15.720 16:24:49 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:15.720 16:24:49 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:15.978 Cannot find device "nvmf_tgt_br" 00:13:15.978 16:24:49 -- nvmf/common.sh@158 -- # true 00:13:15.978 16:24:49 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:15.978 Cannot find device "nvmf_tgt_br2" 00:13:15.978 16:24:49 -- nvmf/common.sh@159 -- # true 00:13:15.978 16:24:49 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:15.978 16:24:49 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:15.978 16:24:49 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:15.978 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:15.979 16:24:49 -- nvmf/common.sh@162 -- # true 00:13:15.979 16:24:49 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:15.979 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:15.979 16:24:49 -- nvmf/common.sh@163 -- # true 00:13:15.979 16:24:49 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:15.979 16:24:49 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:15.979 16:24:49 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:15.979 16:24:49 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:15.979 16:24:49 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:15.979 16:24:49 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:15.979 16:24:49 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:15.979 16:24:49 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:15.979 16:24:49 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:15.979 16:24:49 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:15.979 16:24:49 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:15.979 16:24:49 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:15.979 16:24:49 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:15.979 16:24:49 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:15.979 16:24:49 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:15.979 16:24:49 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:13:15.979 16:24:49 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:15.979 16:24:49 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:15.979 16:24:49 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:15.979 16:24:50 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:15.979 16:24:50 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:16.238 16:24:50 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:16.238 16:24:50 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:16.238 16:24:50 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:16.238 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:16.238 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:13:16.238 00:13:16.238 --- 10.0.0.2 ping statistics --- 00:13:16.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.238 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:13:16.238 16:24:50 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:16.238 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:16.238 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:13:16.238 00:13:16.238 --- 10.0.0.3 ping statistics --- 00:13:16.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.238 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:13:16.238 16:24:50 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:16.238 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:16.238 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:13:16.238 00:13:16.238 --- 10.0.0.1 ping statistics --- 00:13:16.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.238 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:13:16.238 16:24:50 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:16.238 16:24:50 -- nvmf/common.sh@422 -- # return 0 00:13:16.238 16:24:50 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:16.238 16:24:50 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:16.238 16:24:50 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:16.238 16:24:50 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:16.238 16:24:50 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:16.238 16:24:50 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:16.238 16:24:50 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:16.238 16:24:50 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:16.238 16:24:50 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:16.238 16:24:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:16.238 16:24:50 -- common/autotest_common.sh@10 -- # set +x 00:13:16.238 16:24:50 -- nvmf/common.sh@470 -- # nvmfpid=76386 00:13:16.238 16:24:50 -- nvmf/common.sh@471 -- # waitforlisten 76386 00:13:16.238 16:24:50 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:13:16.238 16:24:50 -- common/autotest_common.sh@817 -- # '[' -z 76386 ']' 00:13:16.238 16:24:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.238 16:24:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:16.238 16:24:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:16.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:16.238 16:24:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:16.238 16:24:50 -- common/autotest_common.sh@10 -- # set +x 00:13:16.238 [2024-04-17 16:24:50.156425] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:13:16.238 [2024-04-17 16:24:50.157035] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:16.498 [2024-04-17 16:24:50.300089] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:16.498 [2024-04-17 16:24:50.452619] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:16.498 [2024-04-17 16:24:50.452701] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:16.498 [2024-04-17 16:24:50.452714] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:16.498 [2024-04-17 16:24:50.452722] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:16.498 [2024-04-17 16:24:50.452729] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:16.498 [2024-04-17 16:24:50.452963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:16.498 [2024-04-17 16:24:50.453660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:13:16.498 [2024-04-17 16:24:50.453745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:13:16.498 [2024-04-17 16:24:50.453746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:17.435 16:24:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:17.435 16:24:51 -- common/autotest_common.sh@850 -- # return 0 00:13:17.435 16:24:51 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:17.435 16:24:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:17.435 16:24:51 -- common/autotest_common.sh@10 -- # set +x 00:13:17.435 16:24:51 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:17.435 16:24:51 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:17.435 16:24:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:17.435 16:24:51 -- common/autotest_common.sh@10 -- # set +x 00:13:17.435 [2024-04-17 16:24:51.211681] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:17.435 16:24:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:17.435 16:24:51 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:17.435 16:24:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:17.435 16:24:51 -- common/autotest_common.sh@10 -- # set +x 00:13:17.435 Malloc0 00:13:17.435 16:24:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:17.435 16:24:51 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:17.435 16:24:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:17.435 16:24:51 -- common/autotest_common.sh@10 -- # set +x 00:13:17.435 16:24:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:17.435 16:24:51 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:17.435 16:24:51 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:13:17.435 16:24:51 -- common/autotest_common.sh@10 -- # set +x 00:13:17.435 16:24:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:17.435 16:24:51 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:17.435 16:24:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:17.435 16:24:51 -- common/autotest_common.sh@10 -- # set +x 00:13:17.435 [2024-04-17 16:24:51.296645] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:17.435 16:24:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:17.435 16:24:51 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:13:17.435 16:24:51 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:17.435 16:24:51 -- nvmf/common.sh@521 -- # config=() 00:13:17.435 16:24:51 -- nvmf/common.sh@521 -- # local subsystem config 00:13:17.435 16:24:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:13:17.435 16:24:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:13:17.435 { 00:13:17.435 "params": { 00:13:17.435 "name": "Nvme$subsystem", 00:13:17.435 "trtype": "$TEST_TRANSPORT", 00:13:17.435 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:17.435 "adrfam": "ipv4", 00:13:17.435 "trsvcid": "$NVMF_PORT", 00:13:17.435 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:17.435 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:17.435 "hdgst": ${hdgst:-false}, 00:13:17.435 "ddgst": ${ddgst:-false} 00:13:17.435 }, 00:13:17.435 "method": "bdev_nvme_attach_controller" 00:13:17.435 } 00:13:17.435 EOF 00:13:17.435 )") 00:13:17.435 16:24:51 -- nvmf/common.sh@543 -- # cat 00:13:17.435 16:24:51 -- nvmf/common.sh@545 -- # jq . 00:13:17.435 16:24:51 -- nvmf/common.sh@546 -- # IFS=, 00:13:17.435 16:24:51 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:13:17.435 "params": { 00:13:17.435 "name": "Nvme1", 00:13:17.435 "trtype": "tcp", 00:13:17.435 "traddr": "10.0.0.2", 00:13:17.435 "adrfam": "ipv4", 00:13:17.435 "trsvcid": "4420", 00:13:17.435 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:17.435 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:17.435 "hdgst": false, 00:13:17.435 "ddgst": false 00:13:17.435 }, 00:13:17.435 "method": "bdev_nvme_attach_controller" 00:13:17.435 }' 00:13:17.435 [2024-04-17 16:24:51.356319] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:13:17.435 [2024-04-17 16:24:51.356418] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76440 ] 00:13:17.694 [2024-04-17 16:24:51.495513] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:17.694 [2024-04-17 16:24:51.640896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:17.694 [2024-04-17 16:24:51.640992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:17.694 [2024-04-17 16:24:51.640998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.694 [2024-04-17 16:24:51.650434] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:13:17.694 [2024-04-17 16:24:51.650476] rpc.c: 167:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:13:17.694 [2024-04-17 16:24:51.650492] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: /var/tmp/spdk.sock 00:13:17.953 [2024-04-17 16:24:51.824026] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: /var/tmp/spdk.sock 00:13:17.953 I/O targets: 00:13:17.953 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:17.953 00:13:17.953 00:13:17.953 CUnit - A unit testing framework for C - Version 2.1-3 00:13:17.953 http://cunit.sourceforge.net/ 00:13:17.953 00:13:17.953 00:13:17.953 Suite: bdevio tests on: Nvme1n1 00:13:17.954 Test: blockdev write read block ...passed 00:13:17.954 Test: blockdev write zeroes read block ...passed 00:13:17.954 Test: blockdev write zeroes read no split ...passed 00:13:17.954 Test: blockdev write zeroes read split ...passed 00:13:17.954 Test: blockdev write zeroes read split partial ...passed 00:13:17.954 Test: blockdev reset ...[2024-04-17 16:24:51.946888] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:17.954 [2024-04-17 16:24:51.947028] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1881500 (9): Bad file descriptor 00:13:17.954 [2024-04-17 16:24:51.963218] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:13:17.954 passed 00:13:17.954 Test: blockdev write read 8 blocks ...passed 00:13:17.954 Test: blockdev write read size > 128k ...passed 00:13:17.954 Test: blockdev write read invalid size ...passed 00:13:18.212 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:18.212 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:18.212 Test: blockdev write read max offset ...passed 00:13:18.212 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:18.212 Test: blockdev writev readv 8 blocks ...passed 00:13:18.212 Test: blockdev writev readv 30 x 1block ...passed 00:13:18.212 Test: blockdev writev readv block ...passed 00:13:18.212 Test: blockdev writev readv size > 128k ...passed 00:13:18.212 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:18.212 Test: blockdev comparev and writev ...[2024-04-17 16:24:52.136492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:18.212 [2024-04-17 16:24:52.136558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:18.212 [2024-04-17 16:24:52.136580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:18.212 [2024-04-17 16:24:52.136592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:18.212 [2024-04-17 16:24:52.137001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:18.212 [2024-04-17 16:24:52.137035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:18.212 [2024-04-17 16:24:52.137057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x200 00:13:18.212 [2024-04-17 16:24:52.137071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:18.212 [2024-04-17 16:24:52.137459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:18.213 [2024-04-17 16:24:52.137493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:18.213 [2024-04-17 16:24:52.137511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:18.213 [2024-04-17 16:24:52.137521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:18.213 [2024-04-17 16:24:52.138124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:18.213 [2024-04-17 16:24:52.138152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:18.213 [2024-04-17 16:24:52.138170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:18.213 [2024-04-17 16:24:52.138180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:18.213 passed 00:13:18.213 Test: blockdev nvme passthru rw ...passed 00:13:18.213 Test: blockdev nvme passthru vendor specific ...[2024-04-17 16:24:52.224545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:18.213 [2024-04-17 16:24:52.224660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:18.213 [2024-04-17 16:24:52.224994] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:18.213 [2024-04-17 16:24:52.225031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:18.213 [2024-04-17 16:24:52.225223] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:18.213 [2024-04-17 16:24:52.225249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:18.213 [2024-04-17 16:24:52.225508] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:18.213 [2024-04-17 16:24:52.225543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:18.213 passed 00:13:18.213 Test: blockdev nvme admin passthru ...passed 00:13:18.472 Test: blockdev copy ...passed 00:13:18.472 00:13:18.472 Run Summary: Type Total Ran Passed Failed Inactive 00:13:18.472 suites 1 1 n/a 0 0 00:13:18.472 tests 23 23 23 0 0 00:13:18.472 asserts 152 152 152 0 n/a 00:13:18.472 00:13:18.472 Elapsed time = 0.907 seconds 00:13:18.766 16:24:52 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:18.766 16:24:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:18.766 16:24:52 -- 
common/autotest_common.sh@10 -- # set +x 00:13:18.766 16:24:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:18.766 16:24:52 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:18.766 16:24:52 -- target/bdevio.sh@30 -- # nvmftestfini 00:13:18.766 16:24:52 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:18.766 16:24:52 -- nvmf/common.sh@117 -- # sync 00:13:18.766 16:24:52 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:18.766 16:24:52 -- nvmf/common.sh@120 -- # set +e 00:13:18.766 16:24:52 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:18.766 16:24:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:18.766 rmmod nvme_tcp 00:13:18.766 rmmod nvme_fabrics 00:13:18.766 rmmod nvme_keyring 00:13:18.766 16:24:52 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:18.766 16:24:52 -- nvmf/common.sh@124 -- # set -e 00:13:18.766 16:24:52 -- nvmf/common.sh@125 -- # return 0 00:13:18.766 16:24:52 -- nvmf/common.sh@478 -- # '[' -n 76386 ']' 00:13:18.766 16:24:52 -- nvmf/common.sh@479 -- # killprocess 76386 00:13:18.766 16:24:52 -- common/autotest_common.sh@936 -- # '[' -z 76386 ']' 00:13:18.766 16:24:52 -- common/autotest_common.sh@940 -- # kill -0 76386 00:13:18.766 16:24:52 -- common/autotest_common.sh@941 -- # uname 00:13:18.766 16:24:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:18.766 16:24:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76386 00:13:18.766 16:24:52 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:13:18.766 16:24:52 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:13:18.766 killing process with pid 76386 00:13:18.766 16:24:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76386' 00:13:18.766 16:24:52 -- common/autotest_common.sh@955 -- # kill 76386 00:13:18.766 16:24:52 -- common/autotest_common.sh@960 -- # wait 76386 00:13:19.028 16:24:52 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:19.028 16:24:52 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:19.028 16:24:52 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:19.028 16:24:52 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:19.028 16:24:52 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:19.028 16:24:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:19.028 16:24:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:19.028 16:24:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:19.028 16:24:53 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:19.028 00:13:19.028 real 0m3.417s 00:13:19.028 user 0m11.978s 00:13:19.028 sys 0m0.886s 00:13:19.028 16:24:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:19.028 16:24:53 -- common/autotest_common.sh@10 -- # set +x 00:13:19.028 ************************************ 00:13:19.028 END TEST nvmf_bdevio 00:13:19.028 ************************************ 00:13:19.028 16:24:53 -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:13:19.028 16:24:53 -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:19.028 16:24:53 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:13:19.028 16:24:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:19.028 16:24:53 -- common/autotest_common.sh@10 -- # set +x 00:13:19.287 ************************************ 00:13:19.287 START TEST nvmf_bdevio_no_huge 00:13:19.287 ************************************ 
00:13:19.287 16:24:53 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:19.287 * Looking for test storage... 00:13:19.287 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:19.287 16:24:53 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:19.287 16:24:53 -- nvmf/common.sh@7 -- # uname -s 00:13:19.287 16:24:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:19.287 16:24:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:19.287 16:24:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:19.287 16:24:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:19.287 16:24:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:19.287 16:24:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:19.287 16:24:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:19.287 16:24:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:19.287 16:24:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:19.287 16:24:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:19.287 16:24:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d 00:13:19.287 16:24:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=35bbb10f-fc38-42ac-b909-033700c5e05d 00:13:19.287 16:24:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:19.287 16:24:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:19.287 16:24:53 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:19.287 16:24:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:19.287 16:24:53 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:19.287 16:24:53 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:19.287 16:24:53 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:19.287 16:24:53 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:19.287 16:24:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.287 16:24:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.287 16:24:53 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.287 16:24:53 -- paths/export.sh@5 -- # export PATH 00:13:19.287 16:24:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.287 16:24:53 -- nvmf/common.sh@47 -- # : 0 00:13:19.287 16:24:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:19.287 16:24:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:19.287 16:24:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:19.287 16:24:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:19.287 16:24:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:19.287 16:24:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:19.287 16:24:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:19.287 16:24:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:19.287 16:24:53 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:19.287 16:24:53 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:19.287 16:24:53 -- target/bdevio.sh@14 -- # nvmftestinit 00:13:19.287 16:24:53 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:19.287 16:24:53 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:19.287 16:24:53 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:19.287 16:24:53 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:19.287 16:24:53 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:19.287 16:24:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:19.287 16:24:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:19.287 16:24:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:19.287 16:24:53 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:13:19.287 16:24:53 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:13:19.287 16:24:53 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:13:19.287 16:24:53 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:13:19.287 16:24:53 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:13:19.287 16:24:53 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:13:19.287 16:24:53 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:19.287 16:24:53 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:19.287 16:24:53 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:19.287 16:24:53 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:19.287 16:24:53 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:19.287 16:24:53 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:19.287 16:24:53 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:19.287 16:24:53 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:19.287 16:24:53 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:19.287 16:24:53 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:19.287 16:24:53 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:19.287 16:24:53 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:19.287 16:24:53 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:19.287 16:24:53 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:19.287 Cannot find device "nvmf_tgt_br" 00:13:19.287 16:24:53 -- nvmf/common.sh@155 -- # true 00:13:19.287 16:24:53 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:19.287 Cannot find device "nvmf_tgt_br2" 00:13:19.287 16:24:53 -- nvmf/common.sh@156 -- # true 00:13:19.287 16:24:53 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:19.287 16:24:53 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:19.287 Cannot find device "nvmf_tgt_br" 00:13:19.288 16:24:53 -- nvmf/common.sh@158 -- # true 00:13:19.288 16:24:53 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:19.288 Cannot find device "nvmf_tgt_br2" 00:13:19.288 16:24:53 -- nvmf/common.sh@159 -- # true 00:13:19.288 16:24:53 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:19.547 16:24:53 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:19.547 16:24:53 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:19.547 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:19.547 16:24:53 -- nvmf/common.sh@162 -- # true 00:13:19.547 16:24:53 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:19.547 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:19.547 16:24:53 -- nvmf/common.sh@163 -- # true 00:13:19.547 16:24:53 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:19.547 16:24:53 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:19.547 16:24:53 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:19.547 16:24:53 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:19.547 16:24:53 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:19.547 16:24:53 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:19.547 16:24:53 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:19.547 16:24:53 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:19.547 16:24:53 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:19.547 16:24:53 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:19.547 16:24:53 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:19.547 16:24:53 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:19.547 16:24:53 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:19.547 16:24:53 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:19.547 16:24:53 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:19.547 16:24:53 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:13:19.547 16:24:53 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:19.547 16:24:53 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:19.547 16:24:53 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:19.547 16:24:53 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:19.806 16:24:53 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:19.806 16:24:53 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:19.806 16:24:53 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:19.806 16:24:53 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:19.806 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:19.806 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:13:19.806 00:13:19.806 --- 10.0.0.2 ping statistics --- 00:13:19.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:19.806 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:13:19.806 16:24:53 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:19.806 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:19.806 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:13:19.806 00:13:19.806 --- 10.0.0.3 ping statistics --- 00:13:19.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:19.806 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:13:19.806 16:24:53 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:19.806 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:19.806 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:13:19.806 00:13:19.806 --- 10.0.0.1 ping statistics --- 00:13:19.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:19.806 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:13:19.806 16:24:53 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:19.806 16:24:53 -- nvmf/common.sh@422 -- # return 0 00:13:19.806 16:24:53 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:19.806 16:24:53 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:19.806 16:24:53 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:19.806 16:24:53 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:19.806 16:24:53 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:19.806 16:24:53 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:19.806 16:24:53 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:19.806 16:24:53 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:19.806 16:24:53 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:19.806 16:24:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:19.806 16:24:53 -- common/autotest_common.sh@10 -- # set +x 00:13:19.806 16:24:53 -- nvmf/common.sh@470 -- # nvmfpid=76625 00:13:19.806 16:24:53 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:13:19.806 16:24:53 -- nvmf/common.sh@471 -- # waitforlisten 76625 00:13:19.806 16:24:53 -- common/autotest_common.sh@817 -- # '[' -z 76625 ']' 00:13:19.806 16:24:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.806 16:24:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:19.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:19.806 16:24:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:19.806 16:24:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:19.806 16:24:53 -- common/autotest_common.sh@10 -- # set +x 00:13:19.806 [2024-04-17 16:24:53.722308] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:13:19.806 [2024-04-17 16:24:53.722403] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:13:20.066 [2024-04-17 16:24:53.871855] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:20.066 [2024-04-17 16:24:54.078371] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:20.066 [2024-04-17 16:24:54.078438] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:20.066 [2024-04-17 16:24:54.078453] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:20.066 [2024-04-17 16:24:54.078464] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:20.066 [2024-04-17 16:24:54.078473] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:20.066 [2024-04-17 16:24:54.078663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:20.066 [2024-04-17 16:24:54.078876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:13:20.066 [2024-04-17 16:24:54.079471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:13:20.066 [2024-04-17 16:24:54.079481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:21.011 16:24:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:21.011 16:24:54 -- common/autotest_common.sh@850 -- # return 0 00:13:21.011 16:24:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:21.011 16:24:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:21.011 16:24:54 -- common/autotest_common.sh@10 -- # set +x 00:13:21.011 16:24:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:21.011 16:24:54 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:21.011 16:24:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:21.011 16:24:54 -- common/autotest_common.sh@10 -- # set +x 00:13:21.011 [2024-04-17 16:24:54.741874] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:21.011 16:24:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:21.011 16:24:54 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:21.011 16:24:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:21.011 16:24:54 -- common/autotest_common.sh@10 -- # set +x 00:13:21.011 Malloc0 00:13:21.011 16:24:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:21.011 16:24:54 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:21.011 16:24:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:21.011 16:24:54 -- common/autotest_common.sh@10 -- # set +x 00:13:21.011 16:24:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:21.011 16:24:54 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:21.011 16:24:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:21.011 16:24:54 -- common/autotest_common.sh@10 -- # set +x 00:13:21.011 16:24:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:21.011 16:24:54 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:21.011 16:24:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:21.011 16:24:54 -- common/autotest_common.sh@10 -- # set +x 00:13:21.011 [2024-04-17 16:24:54.782037] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:21.011 16:24:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:21.011 16:24:54 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:13:21.011 16:24:54 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:21.011 16:24:54 -- nvmf/common.sh@521 -- # config=() 00:13:21.011 16:24:54 -- nvmf/common.sh@521 -- # local subsystem config 00:13:21.011 16:24:54 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:13:21.011 16:24:54 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:13:21.011 { 00:13:21.011 "params": { 00:13:21.011 "name": "Nvme$subsystem", 00:13:21.011 "trtype": "$TEST_TRANSPORT", 00:13:21.011 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:21.011 "adrfam": "ipv4", 00:13:21.011 "trsvcid": "$NVMF_PORT", 00:13:21.011 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:21.011 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:21.011 "hdgst": ${hdgst:-false}, 00:13:21.011 "ddgst": ${ddgst:-false} 00:13:21.011 }, 00:13:21.011 "method": "bdev_nvme_attach_controller" 00:13:21.011 } 00:13:21.011 EOF 00:13:21.011 )") 00:13:21.011 16:24:54 -- nvmf/common.sh@543 -- # cat 00:13:21.011 16:24:54 -- nvmf/common.sh@545 -- # jq . 00:13:21.011 16:24:54 -- nvmf/common.sh@546 -- # IFS=, 00:13:21.011 16:24:54 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:13:21.011 "params": { 00:13:21.011 "name": "Nvme1", 00:13:21.011 "trtype": "tcp", 00:13:21.011 "traddr": "10.0.0.2", 00:13:21.011 "adrfam": "ipv4", 00:13:21.011 "trsvcid": "4420", 00:13:21.011 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:21.011 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:21.011 "hdgst": false, 00:13:21.011 "ddgst": false 00:13:21.011 }, 00:13:21.011 "method": "bdev_nvme_attach_controller" 00:13:21.011 }' 00:13:21.011 [2024-04-17 16:24:54.844823] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:13:21.011 [2024-04-17 16:24:54.844917] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid76679 ] 00:13:21.011 [2024-04-17 16:24:54.988582] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:21.270 [2024-04-17 16:24:55.131325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:21.270 [2024-04-17 16:24:55.131466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:21.270 [2024-04-17 16:24:55.131469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.270 [2024-04-17 16:24:55.140354] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:13:21.270 [2024-04-17 16:24:55.140386] rpc.c: 167:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:13:21.270 [2024-04-17 16:24:55.140396] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: /var/tmp/spdk.sock 00:13:21.270 [2024-04-17 16:24:55.305084] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: /var/tmp/spdk.sock 00:13:21.270 I/O targets: 00:13:21.270 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:21.270 00:13:21.270 00:13:21.270 CUnit - A unit testing framework for C - Version 2.1-3 00:13:21.270 http://cunit.sourceforge.net/ 00:13:21.270 00:13:21.270 00:13:21.270 Suite: bdevio tests on: Nvme1n1 00:13:21.528 Test: blockdev write read block ...passed 00:13:21.528 Test: blockdev write zeroes read block ...passed 00:13:21.528 Test: blockdev write zeroes read no split ...passed 00:13:21.528 Test: blockdev write zeroes read split ...passed 00:13:21.528 Test: blockdev write zeroes read split partial ...passed 00:13:21.528 Test: blockdev reset ...[2024-04-17 16:24:55.437665] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:21.528 [2024-04-17 16:24:55.437794] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa4560 (9): Bad file descriptor 00:13:21.528 passed 00:13:21.528 Test: blockdev write read 8 blocks ...[2024-04-17 16:24:55.453895] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:13:21.528 passed 00:13:21.528 Test: blockdev write read size > 128k ...passed 00:13:21.528 Test: blockdev write read invalid size ...passed 00:13:21.528 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:21.528 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:21.528 Test: blockdev write read max offset ...passed 00:13:21.787 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:21.787 Test: blockdev writev readv 8 blocks ...passed 00:13:21.787 Test: blockdev writev readv 30 x 1block ...passed 00:13:21.787 Test: blockdev writev readv block ...passed 00:13:21.787 Test: blockdev writev readv size > 128k ...passed 00:13:21.787 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:21.787 Test: blockdev comparev and writev ...[2024-04-17 16:24:55.630021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:21.787 [2024-04-17 16:24:55.630083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:21.787 [2024-04-17 16:24:55.630104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:21.787 [2024-04-17 16:24:55.630116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:21.787 [2024-04-17 16:24:55.630412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:21.787 [2024-04-17 16:24:55.630429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:21.787 [2024-04-17 16:24:55.630447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x200 00:13:21.787 [2024-04-17 16:24:55.630458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:21.787 [2024-04-17 16:24:55.630731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:21.787 [2024-04-17 16:24:55.630747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:21.787 [2024-04-17 16:24:55.630764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:21.787 [2024-04-17 16:24:55.630799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:21.787 [2024-04-17 16:24:55.631090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:21.787 [2024-04-17 16:24:55.631107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:21.787 [2024-04-17 16:24:55.631123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:21.787 [2024-04-17 16:24:55.631133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:21.787 passed 00:13:21.787 Test: blockdev nvme passthru rw ...passed 00:13:21.787 Test: blockdev nvme passthru vendor specific ...[2024-04-17 16:24:55.718355] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:21.787 [2024-04-17 16:24:55.718417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:21.787 [2024-04-17 16:24:55.718554] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:21.787 [2024-04-17 16:24:55.718575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:21.787 [2024-04-17 16:24:55.718734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:21.787 [2024-04-17 16:24:55.718755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:21.787 [2024-04-17 16:24:55.718903] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:21.787 [2024-04-17 16:24:55.718925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:21.787 passed 00:13:21.787 Test: blockdev nvme admin passthru ...passed 00:13:21.787 Test: blockdev copy ...passed 00:13:21.787 00:13:21.787 Run Summary: Type Total Ran Passed Failed Inactive 00:13:21.787 suites 1 1 n/a 0 0 00:13:21.787 tests 23 23 23 0 0 00:13:21.787 asserts 152 152 152 0 n/a 00:13:21.787 00:13:21.787 Elapsed time = 0.948 seconds 00:13:22.353 16:24:56 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:22.353 16:24:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:22.353 16:24:56 -- 
common/autotest_common.sh@10 -- # set +x 00:13:22.353 16:24:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:22.353 16:24:56 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:22.353 16:24:56 -- target/bdevio.sh@30 -- # nvmftestfini 00:13:22.353 16:24:56 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:22.353 16:24:56 -- nvmf/common.sh@117 -- # sync 00:13:22.353 16:24:56 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:22.353 16:24:56 -- nvmf/common.sh@120 -- # set +e 00:13:22.353 16:24:56 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:22.353 16:24:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:22.353 rmmod nvme_tcp 00:13:22.353 rmmod nvme_fabrics 00:13:22.353 rmmod nvme_keyring 00:13:22.353 16:24:56 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:22.353 16:24:56 -- nvmf/common.sh@124 -- # set -e 00:13:22.353 16:24:56 -- nvmf/common.sh@125 -- # return 0 00:13:22.353 16:24:56 -- nvmf/common.sh@478 -- # '[' -n 76625 ']' 00:13:22.353 16:24:56 -- nvmf/common.sh@479 -- # killprocess 76625 00:13:22.353 16:24:56 -- common/autotest_common.sh@936 -- # '[' -z 76625 ']' 00:13:22.353 16:24:56 -- common/autotest_common.sh@940 -- # kill -0 76625 00:13:22.353 16:24:56 -- common/autotest_common.sh@941 -- # uname 00:13:22.353 16:24:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:22.353 16:24:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76625 00:13:22.353 16:24:56 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:13:22.353 16:24:56 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:13:22.353 killing process with pid 76625 00:13:22.353 16:24:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76625' 00:13:22.353 16:24:56 -- common/autotest_common.sh@955 -- # kill 76625 00:13:22.353 16:24:56 -- common/autotest_common.sh@960 -- # wait 76625 00:13:22.919 16:24:56 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:22.919 16:24:56 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:22.919 16:24:56 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:22.919 16:24:56 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:22.919 16:24:56 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:22.919 16:24:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:22.919 16:24:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:22.919 16:24:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:22.919 16:24:56 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:22.919 00:13:22.919 real 0m3.715s 00:13:22.919 user 0m12.846s 00:13:22.919 sys 0m1.449s 00:13:22.919 16:24:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:22.919 16:24:56 -- common/autotest_common.sh@10 -- # set +x 00:13:22.919 ************************************ 00:13:22.919 END TEST nvmf_bdevio_no_huge 00:13:22.919 ************************************ 00:13:22.919 16:24:56 -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:22.919 16:24:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:22.919 16:24:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:22.919 16:24:56 -- common/autotest_common.sh@10 -- # set +x 00:13:23.178 ************************************ 00:13:23.178 START TEST nvmf_tls 00:13:23.178 ************************************ 00:13:23.178 16:24:56 -- common/autotest_common.sh@1111 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:23.178 * Looking for test storage... 00:13:23.178 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:23.178 16:24:57 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:23.178 16:24:57 -- nvmf/common.sh@7 -- # uname -s 00:13:23.178 16:24:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:23.178 16:24:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:23.178 16:24:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:23.178 16:24:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:23.178 16:24:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:23.178 16:24:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:23.178 16:24:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:23.178 16:24:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:23.178 16:24:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:23.178 16:24:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:23.178 16:24:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d 00:13:23.178 16:24:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=35bbb10f-fc38-42ac-b909-033700c5e05d 00:13:23.178 16:24:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:23.178 16:24:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:23.178 16:24:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:23.178 16:24:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:23.178 16:24:57 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:23.178 16:24:57 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:23.178 16:24:57 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:23.178 16:24:57 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:23.178 16:24:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.178 16:24:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.178 16:24:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.178 16:24:57 -- paths/export.sh@5 -- # export PATH 00:13:23.178 16:24:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.178 16:24:57 -- nvmf/common.sh@47 -- # : 0 00:13:23.178 16:24:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:23.178 16:24:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:23.178 16:24:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:23.178 16:24:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:23.178 16:24:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:23.178 16:24:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:23.178 16:24:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:23.178 16:24:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:23.178 16:24:57 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:23.178 16:24:57 -- target/tls.sh@62 -- # nvmftestinit 00:13:23.178 16:24:57 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:23.178 16:24:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:23.178 16:24:57 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:23.178 16:24:57 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:23.178 16:24:57 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:23.178 16:24:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.178 16:24:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:23.178 16:24:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:23.178 16:24:57 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:13:23.178 16:24:57 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:13:23.178 16:24:57 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:13:23.178 16:24:57 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:13:23.178 16:24:57 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:13:23.178 16:24:57 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:13:23.178 16:24:57 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:23.178 16:24:57 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:23.178 16:24:57 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:23.178 16:24:57 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:23.178 16:24:57 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:23.178 16:24:57 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:23.178 16:24:57 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:23.178 
16:24:57 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:23.178 16:24:57 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:23.178 16:24:57 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:23.178 16:24:57 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:23.178 16:24:57 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:23.178 16:24:57 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:23.178 16:24:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:23.178 Cannot find device "nvmf_tgt_br" 00:13:23.178 16:24:57 -- nvmf/common.sh@155 -- # true 00:13:23.178 16:24:57 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:23.178 Cannot find device "nvmf_tgt_br2" 00:13:23.178 16:24:57 -- nvmf/common.sh@156 -- # true 00:13:23.178 16:24:57 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:23.178 16:24:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:23.178 Cannot find device "nvmf_tgt_br" 00:13:23.178 16:24:57 -- nvmf/common.sh@158 -- # true 00:13:23.178 16:24:57 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:23.178 Cannot find device "nvmf_tgt_br2" 00:13:23.178 16:24:57 -- nvmf/common.sh@159 -- # true 00:13:23.178 16:24:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:23.178 16:24:57 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:23.178 16:24:57 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:23.178 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:23.178 16:24:57 -- nvmf/common.sh@162 -- # true 00:13:23.178 16:24:57 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:23.445 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:23.445 16:24:57 -- nvmf/common.sh@163 -- # true 00:13:23.445 16:24:57 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:23.445 16:24:57 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:23.445 16:24:57 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:23.445 16:24:57 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:23.445 16:24:57 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:23.445 16:24:57 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:23.445 16:24:57 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:23.445 16:24:57 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:23.445 16:24:57 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:23.445 16:24:57 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:23.445 16:24:57 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:23.445 16:24:57 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:23.445 16:24:57 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:23.445 16:24:57 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:23.445 16:24:57 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:23.445 16:24:57 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:23.445 16:24:57 -- 
nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:23.445 16:24:57 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:23.445 16:24:57 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:23.445 16:24:57 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:23.445 16:24:57 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:23.445 16:24:57 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:23.445 16:24:57 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:23.445 16:24:57 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:23.445 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:23.445 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:13:23.445 00:13:23.445 --- 10.0.0.2 ping statistics --- 00:13:23.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:23.445 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:13:23.445 16:24:57 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:23.445 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:23.445 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:13:23.445 00:13:23.445 --- 10.0.0.3 ping statistics --- 00:13:23.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:23.445 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:13:23.445 16:24:57 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:23.445 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:23.445 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:13:23.445 00:13:23.445 --- 10.0.0.1 ping statistics --- 00:13:23.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:23.445 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:13:23.445 16:24:57 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:23.445 16:24:57 -- nvmf/common.sh@422 -- # return 0 00:13:23.445 16:24:57 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:23.445 16:24:57 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:23.445 16:24:57 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:23.445 16:24:57 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:23.445 16:24:57 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:23.445 16:24:57 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:23.445 16:24:57 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:23.445 16:24:57 -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:13:23.445 16:24:57 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:23.445 16:24:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:23.445 16:24:57 -- common/autotest_common.sh@10 -- # set +x 00:13:23.445 16:24:57 -- nvmf/common.sh@470 -- # nvmfpid=76870 00:13:23.445 16:24:57 -- nvmf/common.sh@471 -- # waitforlisten 76870 00:13:23.445 16:24:57 -- common/autotest_common.sh@817 -- # '[' -z 76870 ']' 00:13:23.445 16:24:57 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:13:23.445 16:24:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:23.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
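For reference, the nvmf_veth_init sequence traced above reduces to one Linux bridge joining a root-namespace initiator veth to two target-side veths inside nvmf_tgt_ns_spdk. A condensed sketch of the same setup, using the interface names and addresses from this run (order slightly compacted; the traced commands are authoritative):

# Condensed sketch of the topology nvmf_veth_init builds above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator <-> bridge
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target port 1 <-> bridge
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target port 2 <-> bridge
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2    # initiator -> first target port, as verified above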
00:13:23.445 16:24:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:23.445 16:24:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:23.445 16:24:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:23.445 16:24:57 -- common/autotest_common.sh@10 -- # set +x 00:13:23.702 [2024-04-17 16:24:57.536029] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:13:23.702 [2024-04-17 16:24:57.536123] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:23.702 [2024-04-17 16:24:57.677398] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:23.961 [2024-04-17 16:24:57.801077] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:23.961 [2024-04-17 16:24:57.801439] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:23.961 [2024-04-17 16:24:57.801557] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:23.961 [2024-04-17 16:24:57.801673] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:23.961 [2024-04-17 16:24:57.801817] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:23.962 [2024-04-17 16:24:57.801998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:24.528 16:24:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:24.528 16:24:58 -- common/autotest_common.sh@850 -- # return 0 00:13:24.528 16:24:58 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:24.528 16:24:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:24.528 16:24:58 -- common/autotest_common.sh@10 -- # set +x 00:13:24.529 16:24:58 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:24.529 16:24:58 -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:13:24.529 16:24:58 -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:13:24.788 true 00:13:24.788 16:24:58 -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:24.788 16:24:58 -- target/tls.sh@73 -- # jq -r .tls_version 00:13:25.048 16:24:59 -- target/tls.sh@73 -- # version=0 00:13:25.048 16:24:59 -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:13:25.048 16:24:59 -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:25.617 16:24:59 -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:25.617 16:24:59 -- target/tls.sh@81 -- # jq -r .tls_version 00:13:25.875 16:24:59 -- target/tls.sh@81 -- # version=13 00:13:25.875 16:24:59 -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:13:25.875 16:24:59 -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:13:26.133 16:25:00 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:26.133 16:25:00 -- target/tls.sh@89 -- # jq -r .tls_version 00:13:26.391 16:25:00 -- target/tls.sh@89 -- # version=7 00:13:26.391 16:25:00 -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:13:26.391 16:25:00 
-- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:26.391 16:25:00 -- target/tls.sh@96 -- # jq -r .enable_ktls 00:13:26.649 16:25:00 -- target/tls.sh@96 -- # ktls=false 00:13:26.649 16:25:00 -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:13:26.649 16:25:00 -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:13:26.907 16:25:00 -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:26.907 16:25:00 -- target/tls.sh@104 -- # jq -r .enable_ktls 00:13:27.165 16:25:01 -- target/tls.sh@104 -- # ktls=true 00:13:27.165 16:25:01 -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:13:27.165 16:25:01 -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:13:27.422 16:25:01 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:27.422 16:25:01 -- target/tls.sh@112 -- # jq -r .enable_ktls 00:13:27.988 16:25:01 -- target/tls.sh@112 -- # ktls=false 00:13:27.988 16:25:01 -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:13:27.988 16:25:01 -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:13:27.988 16:25:01 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:13:27.988 16:25:01 -- nvmf/common.sh@691 -- # local prefix key digest 00:13:27.988 16:25:01 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:13:27.988 16:25:01 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:13:27.988 16:25:01 -- nvmf/common.sh@693 -- # digest=1 00:13:27.988 16:25:01 -- nvmf/common.sh@694 -- # python - 00:13:27.988 16:25:01 -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:27.988 16:25:01 -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:13:27.988 16:25:01 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:13:27.988 16:25:01 -- nvmf/common.sh@691 -- # local prefix key digest 00:13:27.988 16:25:01 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:13:27.988 16:25:01 -- nvmf/common.sh@693 -- # key=ffeeddccbbaa99887766554433221100 00:13:27.988 16:25:01 -- nvmf/common.sh@693 -- # digest=1 00:13:27.988 16:25:01 -- nvmf/common.sh@694 -- # python - 00:13:27.988 16:25:01 -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:27.988 16:25:01 -- target/tls.sh@121 -- # mktemp 00:13:27.988 16:25:01 -- target/tls.sh@121 -- # key_path=/tmp/tmp.VPlsmugWVR 00:13:27.988 16:25:01 -- target/tls.sh@122 -- # mktemp 00:13:27.988 16:25:01 -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.Zt1HImE6Hl 00:13:27.988 16:25:01 -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:27.988 16:25:01 -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:27.988 16:25:01 -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.VPlsmugWVR 00:13:27.988 16:25:01 -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.Zt1HImE6Hl 00:13:27.988 16:25:01 -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:28.246 16:25:02 -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:13:28.836 16:25:02 -- target/tls.sh@133 -- # setup_nvmf_tgt 
/tmp/tmp.VPlsmugWVR 00:13:28.836 16:25:02 -- target/tls.sh@49 -- # local key=/tmp/tmp.VPlsmugWVR 00:13:28.836 16:25:02 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:29.108 [2024-04-17 16:25:02.895704] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:29.108 16:25:02 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:29.367 16:25:03 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:29.367 [2024-04-17 16:25:03.383834] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:29.367 [2024-04-17 16:25:03.384062] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:29.367 16:25:03 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:29.625 malloc0 00:13:29.625 16:25:03 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:29.883 16:25:03 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.VPlsmugWVR 00:13:30.142 [2024-04-17 16:25:04.175984] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:30.400 16:25:04 -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.VPlsmugWVR 00:13:40.402 Initializing NVMe Controllers 00:13:40.402 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:40.402 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:40.402 Initialization complete. Launching workers. 
00:13:40.402 ======================================================== 00:13:40.402 Latency(us) 00:13:40.402 Device Information : IOPS MiB/s Average min max 00:13:40.402 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9140.19 35.70 7003.80 1565.92 9289.81 00:13:40.402 ======================================================== 00:13:40.403 Total : 9140.19 35.70 7003.80 1565.92 9289.81 00:13:40.403 00:13:40.403 16:25:14 -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VPlsmugWVR 00:13:40.403 16:25:14 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:40.403 16:25:14 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:40.403 16:25:14 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:40.403 16:25:14 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.VPlsmugWVR' 00:13:40.403 16:25:14 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:40.403 16:25:14 -- target/tls.sh@28 -- # bdevperf_pid=77241 00:13:40.403 16:25:14 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:40.403 16:25:14 -- target/tls.sh@31 -- # waitforlisten 77241 /var/tmp/bdevperf.sock 00:13:40.403 16:25:14 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:40.403 16:25:14 -- common/autotest_common.sh@817 -- # '[' -z 77241 ']' 00:13:40.403 16:25:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:40.403 16:25:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:40.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:40.403 16:25:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:40.403 16:25:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:40.403 16:25:14 -- common/autotest_common.sh@10 -- # set +x 00:13:40.403 [2024-04-17 16:25:14.437218] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 
00:13:40.403 [2024-04-17 16:25:14.437302] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77241 ] 00:13:40.662 [2024-04-17 16:25:14.574284] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.662 [2024-04-17 16:25:14.703232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:41.598 16:25:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:41.598 16:25:15 -- common/autotest_common.sh@850 -- # return 0 00:13:41.598 16:25:15 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.VPlsmugWVR 00:13:41.857 [2024-04-17 16:25:15.724498] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:41.857 [2024-04-17 16:25:15.724614] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:41.857 TLSTESTn1 00:13:41.857 16:25:15 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:42.115 Running I/O for 10 seconds... 00:13:52.094 00:13:52.094 Latency(us) 00:13:52.094 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:52.094 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:52.094 Verification LBA range: start 0x0 length 0x2000 00:13:52.094 TLSTESTn1 : 10.03 3667.65 14.33 0.00 0.00 34822.29 7566.43 41466.41 00:13:52.094 =================================================================================================================== 00:13:52.094 Total : 3667.65 14.33 0.00 0.00 34822.29 7566.43 41466.41 00:13:52.094 0 00:13:52.094 16:25:25 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:52.094 16:25:25 -- target/tls.sh@45 -- # killprocess 77241 00:13:52.094 16:25:25 -- common/autotest_common.sh@936 -- # '[' -z 77241 ']' 00:13:52.094 16:25:25 -- common/autotest_common.sh@940 -- # kill -0 77241 00:13:52.094 16:25:25 -- common/autotest_common.sh@941 -- # uname 00:13:52.094 16:25:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:52.094 16:25:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77241 00:13:52.094 killing process with pid 77241 00:13:52.094 Received shutdown signal, test time was about 10.000000 seconds 00:13:52.094 00:13:52.094 Latency(us) 00:13:52.094 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:52.094 =================================================================================================================== 00:13:52.094 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:52.094 16:25:26 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:13:52.094 16:25:26 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:13:52.094 16:25:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77241' 00:13:52.094 16:25:26 -- common/autotest_common.sh@955 -- # kill 77241 00:13:52.094 [2024-04-17 16:25:26.017996] app.c: 930:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:52.094 
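The /tmp/tmp.VPlsmugWVR key consumed by the run above was written by format_interchange_psk at target/tls.sh@118. The interchange form is NVMeTLSkey-1:<hh>:<base64>:, where <hh> is the PSK hash indicator (01 for the 32-character key here, 02 for the longer key used later in the run) and the base64 payload is the configured key bytes with a CRC32 appended. A minimal sketch of that encoding, assuming the little-endian CRC framing of the format_key helper that the 'python -' trace above invokes:

key=00112233445566778899aabbccddeeff
python3 - "$key" <<'PYEOF'
# Sketch: append CRC32 (little-endian, an assumption from nvmf/common.sh) and base64-encode.
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, byteorder="little")
print("NVMeTLSkey-1:01:" + base64.b64encode(key + crc).decode() + ":")
PYEOF
# expected, per the trace above: NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: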
16:25:26 -- common/autotest_common.sh@960 -- # wait 77241 00:13:52.353 16:25:26 -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Zt1HImE6Hl 00:13:52.353 16:25:26 -- common/autotest_common.sh@638 -- # local es=0 00:13:52.353 16:25:26 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Zt1HImE6Hl 00:13:52.353 16:25:26 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:13:52.353 16:25:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:52.353 16:25:26 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:13:52.353 16:25:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:52.353 16:25:26 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Zt1HImE6Hl 00:13:52.353 16:25:26 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:52.353 16:25:26 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:52.353 16:25:26 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:52.353 16:25:26 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Zt1HImE6Hl' 00:13:52.353 16:25:26 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:52.353 16:25:26 -- target/tls.sh@28 -- # bdevperf_pid=77392 00:13:52.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:52.353 16:25:26 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:52.353 16:25:26 -- target/tls.sh@31 -- # waitforlisten 77392 /var/tmp/bdevperf.sock 00:13:52.353 16:25:26 -- common/autotest_common.sh@817 -- # '[' -z 77392 ']' 00:13:52.353 16:25:26 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:52.353 16:25:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:52.353 16:25:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:52.353 16:25:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:52.353 16:25:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:52.353 16:25:26 -- common/autotest_common.sh@10 -- # set +x 00:13:52.353 [2024-04-17 16:25:26.353632] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 
00:13:52.353 [2024-04-17 16:25:26.353756] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77392 ] 00:13:52.613 [2024-04-17 16:25:26.494187] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.613 [2024-04-17 16:25:26.621597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:53.547 16:25:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:53.547 16:25:27 -- common/autotest_common.sh@850 -- # return 0 00:13:53.547 16:25:27 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Zt1HImE6Hl 00:13:53.805 [2024-04-17 16:25:27.651342] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:53.805 [2024-04-17 16:25:27.651493] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:53.805 [2024-04-17 16:25:27.656773] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:53.805 [2024-04-17 16:25:27.657352] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24819c0 (107): Transport endpoint is not connected 00:13:53.805 [2024-04-17 16:25:27.658336] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24819c0 (9): Bad file descriptor 00:13:53.805 [2024-04-17 16:25:27.659332] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:53.805 [2024-04-17 16:25:27.659379] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:53.805 [2024-04-17 16:25:27.659395] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:13:53.805 2024/04/17 16:25:27 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.Zt1HImE6Hl subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:13:53.805 request: 00:13:53.805 { 00:13:53.805 "method": "bdev_nvme_attach_controller", 00:13:53.805 "params": { 00:13:53.805 "name": "TLSTEST", 00:13:53.805 "trtype": "tcp", 00:13:53.805 "traddr": "10.0.0.2", 00:13:53.805 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:53.805 "adrfam": "ipv4", 00:13:53.805 "trsvcid": "4420", 00:13:53.805 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:53.805 "psk": "/tmp/tmp.Zt1HImE6Hl" 00:13:53.805 } 00:13:53.805 } 00:13:53.805 Got JSON-RPC error response 00:13:53.805 GoRPCClient: error on JSON-RPC call 00:13:53.805 16:25:27 -- target/tls.sh@36 -- # killprocess 77392 00:13:53.805 16:25:27 -- common/autotest_common.sh@936 -- # '[' -z 77392 ']' 00:13:53.805 16:25:27 -- common/autotest_common.sh@940 -- # kill -0 77392 00:13:53.805 16:25:27 -- common/autotest_common.sh@941 -- # uname 00:13:53.805 16:25:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:53.805 16:25:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77392 00:13:53.805 killing process with pid 77392 00:13:53.805 Received shutdown signal, test time was about 10.000000 seconds 00:13:53.805 00:13:53.805 Latency(us) 00:13:53.805 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:53.805 =================================================================================================================== 00:13:53.805 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:53.805 16:25:27 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:13:53.805 16:25:27 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:13:53.805 16:25:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77392' 00:13:53.805 16:25:27 -- common/autotest_common.sh@955 -- # kill 77392 00:13:53.805 [2024-04-17 16:25:27.710802] app.c: 930:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:53.805 16:25:27 -- common/autotest_common.sh@960 -- # wait 77392 00:13:54.064 16:25:27 -- target/tls.sh@37 -- # return 1 00:13:54.064 16:25:27 -- common/autotest_common.sh@641 -- # es=1 00:13:54.064 16:25:27 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:54.064 16:25:27 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:54.065 16:25:27 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:54.065 16:25:27 -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.VPlsmugWVR 00:13:54.065 16:25:27 -- common/autotest_common.sh@638 -- # local es=0 00:13:54.065 16:25:27 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.VPlsmugWVR 00:13:54.065 16:25:27 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:13:54.065 16:25:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:54.065 16:25:27 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:13:54.065 16:25:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:54.065 16:25:27 -- common/autotest_common.sh@641 -- # run_bdevperf 
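The NOT wrapper that drives this and the remaining negative cases asserts failure: it runs the wrapped command, records the exit status, and itself succeeds only when that status is non-zero, which is exactly the 'es=1 ... return 1' bookkeeping traced right after this point. A minimal sketch of the pattern (the real helper in autotest_common.sh adds the valid_exec_arg checks seen in the trace):

NOT() {
    local es=0
    "$@" || es=$?    # run the wrapped command, capture its exit status
    ((es != 0))      # the assertion passes only if the command failed
}

# usage: the attach with the mismatched key must fail for the test to pass
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Zt1HImE6Hl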
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.VPlsmugWVR 00:13:54.065 16:25:27 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:54.065 16:25:27 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:54.065 16:25:27 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:13:54.065 16:25:27 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.VPlsmugWVR' 00:13:54.065 16:25:27 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:54.065 16:25:27 -- target/tls.sh@28 -- # bdevperf_pid=77438 00:13:54.065 16:25:27 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:54.065 16:25:27 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:54.065 16:25:27 -- target/tls.sh@31 -- # waitforlisten 77438 /var/tmp/bdevperf.sock 00:13:54.065 16:25:27 -- common/autotest_common.sh@817 -- # '[' -z 77438 ']' 00:13:54.065 16:25:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:54.065 16:25:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:54.065 16:25:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:54.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:54.065 16:25:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:54.065 16:25:27 -- common/autotest_common.sh@10 -- # set +x 00:13:54.065 [2024-04-17 16:25:28.042488] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:13:54.065 [2024-04-17 16:25:28.042914] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77438 ] 00:13:54.324 [2024-04-17 16:25:28.179870] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.324 [2024-04-17 16:25:28.306944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:55.261 16:25:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:55.261 16:25:29 -- common/autotest_common.sh@850 -- # return 0 00:13:55.262 16:25:29 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.VPlsmugWVR 00:13:55.262 [2024-04-17 16:25:29.282755] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:55.262 [2024-04-17 16:25:29.282892] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:55.262 [2024-04-17 16:25:29.287820] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:55.262 [2024-04-17 16:25:29.287859] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:55.262 [2024-04-17 16:25:29.287934] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:55.262 [2024-04-17 16:25:29.288509] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x72f9c0 (107): Transport endpoint is not connected 00:13:55.262 [2024-04-17 16:25:29.289497] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x72f9c0 (9): Bad file descriptor 00:13:55.262 [2024-04-17 16:25:29.290494] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:55.262 [2024-04-17 16:25:29.290544] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:55.262 [2024-04-17 16:25:29.290558] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:13:55.262 2024/04/17 16:25:29 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST psk:/tmp/tmp.VPlsmugWVR subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:13:55.262 request: 00:13:55.262 { 00:13:55.262 "method": "bdev_nvme_attach_controller", 00:13:55.262 "params": { 00:13:55.262 "name": "TLSTEST", 00:13:55.262 "trtype": "tcp", 00:13:55.262 "traddr": "10.0.0.2", 00:13:55.262 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:13:55.262 "adrfam": "ipv4", 00:13:55.262 "trsvcid": "4420", 00:13:55.262 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:55.262 "psk": "/tmp/tmp.VPlsmugWVR" 00:13:55.262 } 00:13:55.262 } 00:13:55.262 Got JSON-RPC error response 00:13:55.262 GoRPCClient: error on JSON-RPC call 00:13:55.548 16:25:29 -- target/tls.sh@36 -- # killprocess 77438 00:13:55.548 16:25:29 -- common/autotest_common.sh@936 -- # '[' -z 77438 ']' 00:13:55.548 16:25:29 -- common/autotest_common.sh@940 -- # kill -0 77438 00:13:55.548 16:25:29 -- common/autotest_common.sh@941 -- # uname 00:13:55.548 16:25:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:55.548 16:25:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77438 00:13:55.548 16:25:29 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:13:55.548 16:25:29 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:13:55.548 killing process with pid 77438 00:13:55.548 16:25:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77438' 00:13:55.548 16:25:29 -- common/autotest_common.sh@955 -- # kill 77438 00:13:55.548 Received shutdown signal, test time was about 10.000000 seconds 00:13:55.548 00:13:55.548 Latency(us) 00:13:55.548 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:55.548 =================================================================================================================== 00:13:55.548 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:55.548 [2024-04-17 16:25:29.340692] app.c: 930:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:55.548 16:25:29 -- common/autotest_common.sh@960 -- # wait 77438 00:13:55.806 16:25:29 -- target/tls.sh@37 -- # return 1 00:13:55.806 16:25:29 -- common/autotest_common.sh@641 -- # es=1 00:13:55.806 16:25:29 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:55.806 16:25:29 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:55.806 16:25:29 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:55.806 16:25:29 -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 
nqn.2016-06.io.spdk:host1 /tmp/tmp.VPlsmugWVR 00:13:55.806 16:25:29 -- common/autotest_common.sh@638 -- # local es=0 00:13:55.806 16:25:29 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.VPlsmugWVR 00:13:55.806 16:25:29 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:13:55.806 16:25:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:55.806 16:25:29 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:13:55.806 16:25:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:55.806 16:25:29 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.VPlsmugWVR 00:13:55.806 16:25:29 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:55.806 16:25:29 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:13:55.806 16:25:29 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:55.806 16:25:29 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.VPlsmugWVR' 00:13:55.806 16:25:29 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:55.806 16:25:29 -- target/tls.sh@28 -- # bdevperf_pid=77488 00:13:55.806 16:25:29 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:55.806 16:25:29 -- target/tls.sh@31 -- # waitforlisten 77488 /var/tmp/bdevperf.sock 00:13:55.806 16:25:29 -- common/autotest_common.sh@817 -- # '[' -z 77488 ']' 00:13:55.806 16:25:29 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:55.806 16:25:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:55.806 16:25:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:55.806 16:25:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:55.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:55.806 16:25:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:55.806 16:25:29 -- common/autotest_common.sh@10 -- # set +x 00:13:55.806 [2024-04-17 16:25:29.673872] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 
00:13:55.806 [2024-04-17 16:25:29.674010] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77488 ] 00:13:55.806 [2024-04-17 16:25:29.827242] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.064 [2024-04-17 16:25:29.974215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:56.996 16:25:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:56.996 16:25:30 -- common/autotest_common.sh@850 -- # return 0 00:13:56.996 16:25:30 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.VPlsmugWVR 00:13:57.255 [2024-04-17 16:25:31.054072] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:57.255 [2024-04-17 16:25:31.054289] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:57.255 [2024-04-17 16:25:31.062129] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:57.255 [2024-04-17 16:25:31.062177] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:57.255 [2024-04-17 16:25:31.062242] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:57.255 [2024-04-17 16:25:31.062636] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22a29c0 (107): Transport endpoint is not connected 00:13:57.255 [2024-04-17 16:25:31.063620] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22a29c0 (9): Bad file descriptor 00:13:57.255 [2024-04-17 16:25:31.064604] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:13:57.255 [2024-04-17 16:25:31.064665] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:57.255 [2024-04-17 16:25:31.064690] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
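The tcp_sock_get_key/posix_sock_psk_find_session_server_cb errors above show what the target actually searched for: PSKs are registered per (hostnqn, subnqn) pair and looked up by a TLS PSK identity of the form 'NVMe0R01 <hostnqn> <subnqn>'. host1 was registered against cnode1 only, so the same host offering the same key to cnode2 finds no entry and the handshake is dropped before the fabrics connect completes. A hypothetical illustration of composing that identity (reading the 0/R/01 fields as version 0, retained PSK, SHA-256 hash is taken from NVMe/TCP TLS conventions, not from this log):

hostnqn=nqn.2016-06.io.spdk:host1
subnqn=nqn.2016-06.io.spdk:cnode2
printf 'NVMe0R01 %s %s\n' "$hostnqn" "$subnqn"
# -> NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2   (the identity in the errors above)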
00:13:57.255 2024/04/17 16:25:31 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.VPlsmugWVR subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:13:57.255 request: 00:13:57.255 { 00:13:57.255 "method": "bdev_nvme_attach_controller", 00:13:57.255 "params": { 00:13:57.255 "name": "TLSTEST", 00:13:57.255 "trtype": "tcp", 00:13:57.255 "traddr": "10.0.0.2", 00:13:57.255 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:57.255 "adrfam": "ipv4", 00:13:57.255 "trsvcid": "4420", 00:13:57.255 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:13:57.255 "psk": "/tmp/tmp.VPlsmugWVR" 00:13:57.255 } 00:13:57.255 } 00:13:57.255 Got JSON-RPC error response 00:13:57.255 GoRPCClient: error on JSON-RPC call 00:13:57.255 16:25:31 -- target/tls.sh@36 -- # killprocess 77488 00:13:57.255 16:25:31 -- common/autotest_common.sh@936 -- # '[' -z 77488 ']' 00:13:57.255 16:25:31 -- common/autotest_common.sh@940 -- # kill -0 77488 00:13:57.255 16:25:31 -- common/autotest_common.sh@941 -- # uname 00:13:57.255 16:25:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:57.255 16:25:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77488 00:13:57.255 16:25:31 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:13:57.255 16:25:31 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:13:57.255 killing process with pid 77488 00:13:57.255 16:25:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77488' 00:13:57.255 Received shutdown signal, test time was about 10.000000 seconds 00:13:57.255 00:13:57.255 Latency(us) 00:13:57.255 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:57.255 =================================================================================================================== 00:13:57.255 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:57.255 16:25:31 -- common/autotest_common.sh@955 -- # kill 77488 00:13:57.255 [2024-04-17 16:25:31.120896] app.c: 930:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' 16:25:31 -- common/autotest_common.sh@960 -- # wait 77488 00:13:57.255 scheduled for removal in v24.09 hit 1 times 00:13:57.513 16:25:31 -- target/tls.sh@37 -- # return 1 00:13:57.513 16:25:31 -- common/autotest_common.sh@641 -- # es=1 00:13:57.513 16:25:31 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:57.513 16:25:31 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:57.513 16:25:31 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:57.514 16:25:31 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:57.514 16:25:31 -- common/autotest_common.sh@638 -- # local es=0 00:13:57.514 16:25:31 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:57.514 16:25:31 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:13:57.514 16:25:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:57.514 16:25:31 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:13:57.514 16:25:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:57.514 16:25:31 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 
00:13:57.514 16:25:31 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:57.514 16:25:31 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:57.514 16:25:31 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:57.514 16:25:31 -- target/tls.sh@23 -- # psk= 00:13:57.514 16:25:31 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:57.514 16:25:31 -- target/tls.sh@28 -- # bdevperf_pid=77529 00:13:57.514 16:25:31 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:57.514 16:25:31 -- target/tls.sh@31 -- # waitforlisten 77529 /var/tmp/bdevperf.sock 00:13:57.514 16:25:31 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:57.514 16:25:31 -- common/autotest_common.sh@817 -- # '[' -z 77529 ']' 00:13:57.514 16:25:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:57.514 16:25:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:57.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:57.514 16:25:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:57.514 16:25:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:57.514 16:25:31 -- common/autotest_common.sh@10 -- # set +x 00:13:57.514 [2024-04-17 16:25:31.468120] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:13:57.514 [2024-04-17 16:25:31.468230] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77529 ] 00:13:57.772 [2024-04-17 16:25:31.607015] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.772 [2024-04-17 16:25:31.738034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:58.707 16:25:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:58.707 16:25:32 -- common/autotest_common.sh@850 -- # return 0 00:13:58.707 16:25:32 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:13:58.966 [2024-04-17 16:25:32.777775] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:58.966 [2024-04-17 16:25:32.779023] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x149edc0 (9): Bad file descriptor 00:13:58.966 [2024-04-17 16:25:32.780017] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:58.966 [2024-04-17 16:25:32.780062] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:58.966 [2024-04-17 16:25:32.780075] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:13:58.966 2024/04/17 16:25:32 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:13:58.966 request: 00:13:58.966 { 00:13:58.966 "method": "bdev_nvme_attach_controller", 00:13:58.966 "params": { 00:13:58.966 "name": "TLSTEST", 00:13:58.966 "trtype": "tcp", 00:13:58.966 "traddr": "10.0.0.2", 00:13:58.966 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:58.966 "adrfam": "ipv4", 00:13:58.966 "trsvcid": "4420", 00:13:58.966 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:13:58.966 } 00:13:58.966 } 00:13:58.966 Got JSON-RPC error response 00:13:58.966 GoRPCClient: error on JSON-RPC call 00:13:58.966 16:25:32 -- target/tls.sh@36 -- # killprocess 77529 00:13:58.966 16:25:32 -- common/autotest_common.sh@936 -- # '[' -z 77529 ']' 00:13:58.966 16:25:32 -- common/autotest_common.sh@940 -- # kill -0 77529 00:13:58.966 16:25:32 -- common/autotest_common.sh@941 -- # uname 00:13:58.966 16:25:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:58.966 16:25:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77529 00:13:58.966 killing process with pid 77529 00:13:58.966 Received shutdown signal, test time was about 10.000000 seconds 00:13:58.966 00:13:58.966 Latency(us) 00:13:58.966 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:58.966 =================================================================================================================== 00:13:58.966 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:58.966 16:25:32 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:13:58.966 16:25:32 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:13:58.966 16:25:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77529' 00:13:58.966 16:25:32 -- common/autotest_common.sh@955 -- # kill 77529 00:13:58.966 16:25:32 -- common/autotest_common.sh@960 -- # wait 77529 00:13:59.225 16:25:33 -- target/tls.sh@37 -- # return 1 00:13:59.225 16:25:33 -- common/autotest_common.sh@641 -- # es=1 00:13:59.225 16:25:33 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:59.225 16:25:33 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:59.225 16:25:33 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:59.225 16:25:33 -- target/tls.sh@158 -- # killprocess 76870 00:13:59.225 16:25:33 -- common/autotest_common.sh@936 -- # '[' -z 76870 ']' 00:13:59.225 16:25:33 -- common/autotest_common.sh@940 -- # kill -0 76870 00:13:59.225 16:25:33 -- common/autotest_common.sh@941 -- # uname 00:13:59.225 16:25:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:59.225 16:25:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76870 00:13:59.225 killing process with pid 76870 00:13:59.225 16:25:33 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:59.225 16:25:33 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:59.225 16:25:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76870' 00:13:59.225 16:25:33 -- common/autotest_common.sh@955 -- # kill 76870 00:13:59.225 [2024-04-17 16:25:33.112905] app.c: 930:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:59.225 16:25:33 -- 
common/autotest_common.sh@960 -- # wait 76870 00:13:59.483 16:25:33 -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:13:59.483 16:25:33 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:13:59.483 16:25:33 -- nvmf/common.sh@691 -- # local prefix key digest 00:13:59.483 16:25:33 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:13:59.483 16:25:33 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:13:59.483 16:25:33 -- nvmf/common.sh@693 -- # digest=2 00:13:59.483 16:25:33 -- nvmf/common.sh@694 -- # python - 00:13:59.483 16:25:33 -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:59.483 16:25:33 -- target/tls.sh@160 -- # mktemp 00:13:59.483 16:25:33 -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.7ZxTJDjXUS 00:13:59.483 16:25:33 -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:59.483 16:25:33 -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.7ZxTJDjXUS 00:13:59.483 16:25:33 -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:13:59.483 16:25:33 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:59.483 16:25:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:59.483 16:25:33 -- common/autotest_common.sh@10 -- # set +x 00:13:59.483 16:25:33 -- nvmf/common.sh@470 -- # nvmfpid=77590 00:13:59.483 16:25:33 -- nvmf/common.sh@471 -- # waitforlisten 77590 00:13:59.483 16:25:33 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:59.483 16:25:33 -- common/autotest_common.sh@817 -- # '[' -z 77590 ']' 00:13:59.483 16:25:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:59.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:59.483 16:25:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:59.484 16:25:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:59.484 16:25:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:59.484 16:25:33 -- common/autotest_common.sh@10 -- # set +x 00:13:59.484 [2024-04-17 16:25:33.514125] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:13:59.484 [2024-04-17 16:25:33.514534] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:59.741 [2024-04-17 16:25:33.659209] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.999 [2024-04-17 16:25:33.801582] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:59.999 [2024-04-17 16:25:33.801650] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:59.999 [2024-04-17 16:25:33.801664] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:59.999 [2024-04-17 16:25:33.801675] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:59.999 [2024-04-17 16:25:33.801684] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
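nvmfappstart has just launched the second nvmf_tgt (pid 77590) inside the target namespace, and the 'local max_retries=100' trace above is waitforlisten beginning to poll the RPC socket. A minimal sketch of such a readiness loop, under the assumption that a successful rpc_get_methods call is the readiness signal (the real helper in autotest_common.sh is more thorough):

# Sketch of a waitforlisten-style poll on the SPDK RPC socket.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2> /dev/null || return 1    # target died while starting
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$rpc_addr" \
            rpc_get_methods &> /dev/null; then
            return 0                               # socket is up and answering
        fi
        sleep 0.5
    done
    return 1                                       # gave up waiting
}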
00:13:59.999 [2024-04-17 16:25:33.801724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:00.565 16:25:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:00.565 16:25:34 -- common/autotest_common.sh@850 -- # return 0 00:14:00.565 16:25:34 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:00.565 16:25:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:00.565 16:25:34 -- common/autotest_common.sh@10 -- # set +x 00:14:00.823 16:25:34 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:00.823 16:25:34 -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.7ZxTJDjXUS 00:14:00.823 16:25:34 -- target/tls.sh@49 -- # local key=/tmp/tmp.7ZxTJDjXUS 00:14:00.823 16:25:34 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:01.082 [2024-04-17 16:25:34.954398] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:01.082 16:25:34 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:01.341 16:25:35 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:01.601 [2024-04-17 16:25:35.534531] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:01.601 [2024-04-17 16:25:35.534842] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:01.601 16:25:35 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:01.859 malloc0 00:14:01.859 16:25:35 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:02.117 16:25:36 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7ZxTJDjXUS 00:14:02.377 [2024-04-17 16:25:36.343666] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:02.377 16:25:36 -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7ZxTJDjXUS 00:14:02.377 16:25:36 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:02.377 16:25:36 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:02.377 16:25:36 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:02.377 16:25:36 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.7ZxTJDjXUS' 00:14:02.377 16:25:36 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:02.377 16:25:36 -- target/tls.sh@28 -- # bdevperf_pid=77693 00:14:02.377 16:25:36 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:02.377 16:25:36 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:02.377 16:25:36 -- target/tls.sh@31 -- # waitforlisten 77693 /var/tmp/bdevperf.sock 00:14:02.377 16:25:36 -- common/autotest_common.sh@817 -- # '[' -z 77693 ']' 00:14:02.377 16:25:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:02.377 16:25:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:02.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
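Annotation: condensing the rpc.py calls traced above, the TLS-enabled target is assembled in six steps (here rpc.py abbreviates the full scripts/rpc.py path used in the trace):

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k requests a TLS listener
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7ZxTJDjXUS

The --psk path must point at the 0600-permission key file created earlier; the "PSK path" deprecation warning logged at the add_host step refers to exactly this option.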
00:14:02.377 16:25:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:02.377 16:25:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:02.377 16:25:36 -- common/autotest_common.sh@10 -- # set +x 00:14:02.637 [2024-04-17 16:25:36.425226] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:14:02.637 [2024-04-17 16:25:36.425309] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77693 ] 00:14:02.637 [2024-04-17 16:25:36.562262] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.895 [2024-04-17 16:25:36.700413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:03.462 16:25:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:03.462 16:25:37 -- common/autotest_common.sh@850 -- # return 0 00:14:03.462 16:25:37 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7ZxTJDjXUS 00:14:03.749 [2024-04-17 16:25:37.752614] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:03.749 [2024-04-17 16:25:37.752732] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:04.007 TLSTESTn1 00:14:04.007 16:25:37 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:04.007 Running I/O for 10 seconds... 
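Annotation: on the initiator side the test starts bdevperf in daemon mode on its own RPC socket, attaches a TLS controller against the listener, and only then kicks off the verify workload. A sketch with the same sockets and flags as the trace (bdevperf and bdevperf.py abbreviate the build/examples and examples/bdev/bdevperf paths):

  bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk /tmp/tmp.7ZxTJDjXUS
  bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

The TLSTESTn1 namespace the workload runs against appears once the attach succeeds, as in the results table below.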
00:14:13.983 00:14:13.983 Latency(us) 00:14:13.983 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:13.983 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:13.983 Verification LBA range: start 0x0 length 0x2000 00:14:13.983 TLSTESTn1 : 10.03 3802.75 14.85 0.00 0.00 33588.12 7745.16 28716.68 00:14:13.983 =================================================================================================================== 00:14:13.983 Total : 3802.75 14.85 0.00 0.00 33588.12 7745.16 28716.68 00:14:13.983 0 00:14:13.983 16:25:48 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:13.983 16:25:48 -- target/tls.sh@45 -- # killprocess 77693 00:14:13.983 16:25:48 -- common/autotest_common.sh@936 -- # '[' -z 77693 ']' 00:14:13.983 16:25:48 -- common/autotest_common.sh@940 -- # kill -0 77693 00:14:13.983 16:25:48 -- common/autotest_common.sh@941 -- # uname 00:14:14.241 16:25:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:14.241 16:25:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77693 00:14:14.241 16:25:48 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:14:14.241 16:25:48 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:14:14.241 killing process with pid 77693 00:14:14.241 16:25:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77693' 00:14:14.241 Received shutdown signal, test time was about 10.000000 seconds 00:14:14.241 00:14:14.241 Latency(us) 00:14:14.241 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:14.241 =================================================================================================================== 00:14:14.241 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:14.241 16:25:48 -- common/autotest_common.sh@955 -- # kill 77693 00:14:14.241 [2024-04-17 16:25:48.057320] app.c: 930:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:14.241 16:25:48 -- common/autotest_common.sh@960 -- # wait 77693 00:14:14.499 16:25:48 -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.7ZxTJDjXUS 00:14:14.499 16:25:48 -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7ZxTJDjXUS 00:14:14.499 16:25:48 -- common/autotest_common.sh@638 -- # local es=0 00:14:14.499 16:25:48 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7ZxTJDjXUS 00:14:14.499 16:25:48 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:14:14.499 16:25:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:14.499 16:25:48 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:14:14.499 16:25:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:14.499 16:25:48 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7ZxTJDjXUS 00:14:14.499 16:25:48 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:14.499 16:25:48 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:14.499 16:25:48 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:14.499 16:25:48 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.7ZxTJDjXUS' 00:14:14.499 16:25:48 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:14.499 16:25:48 -- target/tls.sh@28 -- # bdevperf_pid=77852 00:14:14.499 
16:25:48 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:14.499 16:25:48 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:14.499 16:25:48 -- target/tls.sh@31 -- # waitforlisten 77852 /var/tmp/bdevperf.sock 00:14:14.499 16:25:48 -- common/autotest_common.sh@817 -- # '[' -z 77852 ']' 00:14:14.499 16:25:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:14.499 16:25:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:14.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:14.499 16:25:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:14.499 16:25:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:14.499 16:25:48 -- common/autotest_common.sh@10 -- # set +x 00:14:14.499 [2024-04-17 16:25:48.411694] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:14:14.499 [2024-04-17 16:25:48.411862] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77852 ] 00:14:14.758 [2024-04-17 16:25:48.545802] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.758 [2024-04-17 16:25:48.695136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:15.714 16:25:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:15.714 16:25:49 -- common/autotest_common.sh@850 -- # return 0 00:14:15.714 16:25:49 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7ZxTJDjXUS 00:14:15.973 [2024-04-17 16:25:49.914450] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:15.973 [2024-04-17 16:25:49.914560] bdev_nvme.c:6046:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:14:15.973 [2024-04-17 16:25:49.914576] bdev_nvme.c:6155:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.7ZxTJDjXUS 00:14:15.973 2024/04/17 16:25:49 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.7ZxTJDjXUS subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-1 Msg=Operation not permitted 00:14:15.973 request: 00:14:15.973 { 00:14:15.973 "method": "bdev_nvme_attach_controller", 00:14:15.973 "params": { 00:14:15.973 "name": "TLSTEST", 00:14:15.973 "trtype": "tcp", 00:14:15.973 "traddr": "10.0.0.2", 00:14:15.973 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:15.973 "adrfam": "ipv4", 00:14:15.973 "trsvcid": "4420", 00:14:15.973 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:15.973 "psk": "/tmp/tmp.7ZxTJDjXUS" 00:14:15.973 } 00:14:15.973 } 00:14:15.973 Got JSON-RPC error response 00:14:15.973 GoRPCClient: error on JSON-RPC call 00:14:15.973 16:25:49 -- target/tls.sh@36 -- # killprocess 77852 00:14:15.973 16:25:49 -- common/autotest_common.sh@936 -- # '[' -z 77852 ']' 00:14:15.973 16:25:49 -- common/autotest_common.sh@940 -- # kill -0 77852 
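Annotation: the failure above is the intended result. After chmod 0666 the key file is group- and world-readable, bdev_nvme_load_psk rejects it ("Incorrect permissions for PSK file"), and the attach RPC comes back with "Operation not permitted". Restoring owner-only access is enough for the same call to succeed, e.g.:

  chmod 0600 /tmp/tmp.7ZxTJDjXUS   # owner read/write only; anything looser is refused when the PSK is loaded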
00:14:15.973 16:25:49 -- common/autotest_common.sh@941 -- # uname 00:14:15.973 16:25:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:15.973 16:25:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77852 00:14:15.973 killing process with pid 77852 00:14:15.973 Received shutdown signal, test time was about 10.000000 seconds 00:14:15.973 00:14:15.973 Latency(us) 00:14:15.973 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:15.973 =================================================================================================================== 00:14:15.973 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:15.973 16:25:49 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:14:15.973 16:25:49 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:14:15.973 16:25:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77852' 00:14:15.973 16:25:49 -- common/autotest_common.sh@955 -- # kill 77852 00:14:15.973 16:25:49 -- common/autotest_common.sh@960 -- # wait 77852 00:14:16.232 16:25:50 -- target/tls.sh@37 -- # return 1 00:14:16.232 16:25:50 -- common/autotest_common.sh@641 -- # es=1 00:14:16.232 16:25:50 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:16.232 16:25:50 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:16.232 16:25:50 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:16.232 16:25:50 -- target/tls.sh@174 -- # killprocess 77590 00:14:16.232 16:25:50 -- common/autotest_common.sh@936 -- # '[' -z 77590 ']' 00:14:16.232 16:25:50 -- common/autotest_common.sh@940 -- # kill -0 77590 00:14:16.232 16:25:50 -- common/autotest_common.sh@941 -- # uname 00:14:16.232 16:25:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:16.232 16:25:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77590 00:14:16.232 killing process with pid 77590 00:14:16.232 16:25:50 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:16.232 16:25:50 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:16.232 16:25:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77590' 00:14:16.232 16:25:50 -- common/autotest_common.sh@955 -- # kill 77590 00:14:16.232 [2024-04-17 16:25:50.261146] app.c: 930:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:16.232 16:25:50 -- common/autotest_common.sh@960 -- # wait 77590 00:14:16.798 16:25:50 -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:14:16.799 16:25:50 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:16.799 16:25:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:16.799 16:25:50 -- common/autotest_common.sh@10 -- # set +x 00:14:16.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
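Annotation: the return 1 from target/tls.sh@37 together with the es checks implements the expected-failure wrapper invoked as 'NOT run_bdevperf ...' above: the wrapped command must fail, and fail with an ordinary exit status rather than a signal (hence the es > 128 test), for the step to count as passed. A simplified sketch of that idiom, an assumed stand-in rather than SPDK's exact NOT() from autotest_common.sh:

  not() {
      # succeed only when the wrapped command fails
      if "$@"; then
          return 1
      fi
      return 0
  }
  not run_bdevperf ...   # passes here because the TLS attach is rejected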
00:14:16.799 16:25:50 -- nvmf/common.sh@470 -- # nvmfpid=77908 00:14:16.799 16:25:50 -- nvmf/common.sh@471 -- # waitforlisten 77908 00:14:16.799 16:25:50 -- common/autotest_common.sh@817 -- # '[' -z 77908 ']' 00:14:16.799 16:25:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:16.799 16:25:50 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:16.799 16:25:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:16.799 16:25:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:16.799 16:25:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:16.799 16:25:50 -- common/autotest_common.sh@10 -- # set +x 00:14:16.799 [2024-04-17 16:25:50.631676] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:14:16.799 [2024-04-17 16:25:50.631870] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:16.799 [2024-04-17 16:25:50.783067] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:17.057 [2024-04-17 16:25:50.924608] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:17.057 [2024-04-17 16:25:50.924685] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:17.057 [2024-04-17 16:25:50.924699] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:17.057 [2024-04-17 16:25:50.924710] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:17.057 [2024-04-17 16:25:50.924720] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:17.057 [2024-04-17 16:25:50.924755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:17.993 16:25:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:17.993 16:25:51 -- common/autotest_common.sh@850 -- # return 0 00:14:17.993 16:25:51 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:17.993 16:25:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:17.993 16:25:51 -- common/autotest_common.sh@10 -- # set +x 00:14:17.993 16:25:51 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:17.993 16:25:51 -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.7ZxTJDjXUS 00:14:17.993 16:25:51 -- common/autotest_common.sh@638 -- # local es=0 00:14:17.993 16:25:51 -- common/autotest_common.sh@640 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.7ZxTJDjXUS 00:14:17.993 16:25:51 -- common/autotest_common.sh@626 -- # local arg=setup_nvmf_tgt 00:14:17.993 16:25:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:17.993 16:25:51 -- common/autotest_common.sh@630 -- # type -t setup_nvmf_tgt 00:14:17.993 16:25:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:17.993 16:25:51 -- common/autotest_common.sh@641 -- # setup_nvmf_tgt /tmp/tmp.7ZxTJDjXUS 00:14:17.993 16:25:51 -- target/tls.sh@49 -- # local key=/tmp/tmp.7ZxTJDjXUS 00:14:17.993 16:25:51 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:17.993 [2024-04-17 16:25:52.037371] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:18.252 16:25:52 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:18.510 16:25:52 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:18.769 [2024-04-17 16:25:52.561503] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:18.769 [2024-04-17 16:25:52.561748] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:18.769 16:25:52 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:18.769 malloc0 00:14:19.028 16:25:52 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:19.286 16:25:53 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7ZxTJDjXUS 00:14:19.287 [2024-04-17 16:25:53.325848] tcp.c:3562:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:14:19.287 [2024-04-17 16:25:53.325899] tcp.c:3648:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:14:19.287 [2024-04-17 16:25:53.325926] subsystem.c: 967:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:14:19.287 2024/04/17 16:25:53 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/tmp/tmp.7ZxTJDjXUS], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:14:19.287 request: 00:14:19.287 { 00:14:19.287 "method": "nvmf_subsystem_add_host", 00:14:19.287 "params": { 00:14:19.287 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:19.287 "host": 
"nqn.2016-06.io.spdk:host1", 00:14:19.287 "psk": "/tmp/tmp.7ZxTJDjXUS" 00:14:19.287 } 00:14:19.287 } 00:14:19.287 Got JSON-RPC error response 00:14:19.287 GoRPCClient: error on JSON-RPC call 00:14:19.544 16:25:53 -- common/autotest_common.sh@641 -- # es=1 00:14:19.545 16:25:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:19.545 16:25:53 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:19.545 16:25:53 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:19.545 16:25:53 -- target/tls.sh@180 -- # killprocess 77908 00:14:19.545 16:25:53 -- common/autotest_common.sh@936 -- # '[' -z 77908 ']' 00:14:19.545 16:25:53 -- common/autotest_common.sh@940 -- # kill -0 77908 00:14:19.545 16:25:53 -- common/autotest_common.sh@941 -- # uname 00:14:19.545 16:25:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:19.545 16:25:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77908 00:14:19.545 16:25:53 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:19.545 16:25:53 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:19.545 killing process with pid 77908 00:14:19.545 16:25:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77908' 00:14:19.545 16:25:53 -- common/autotest_common.sh@955 -- # kill 77908 00:14:19.545 16:25:53 -- common/autotest_common.sh@960 -- # wait 77908 00:14:19.803 16:25:53 -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.7ZxTJDjXUS 00:14:19.803 16:25:53 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:14:19.803 16:25:53 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:19.803 16:25:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:19.803 16:25:53 -- common/autotest_common.sh@10 -- # set +x 00:14:19.803 16:25:53 -- nvmf/common.sh@470 -- # nvmfpid=78024 00:14:19.803 16:25:53 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:19.803 16:25:53 -- nvmf/common.sh@471 -- # waitforlisten 78024 00:14:19.803 16:25:53 -- common/autotest_common.sh@817 -- # '[' -z 78024 ']' 00:14:19.803 16:25:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.803 16:25:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:19.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:19.803 16:25:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:19.803 16:25:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:19.803 16:25:53 -- common/autotest_common.sh@10 -- # set +x 00:14:19.803 [2024-04-17 16:25:53.700735] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:14:19.803 [2024-04-17 16:25:53.700847] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:19.803 [2024-04-17 16:25:53.833705] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.062 [2024-04-17 16:25:53.947719] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:20.062 [2024-04-17 16:25:53.947803] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:20.062 [2024-04-17 16:25:53.947816] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:20.062 [2024-04-17 16:25:53.947825] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:20.062 [2024-04-17 16:25:53.947834] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:20.062 [2024-04-17 16:25:53.947861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:20.997 16:25:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:20.997 16:25:54 -- common/autotest_common.sh@850 -- # return 0 00:14:20.997 16:25:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:20.997 16:25:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:20.997 16:25:54 -- common/autotest_common.sh@10 -- # set +x 00:14:20.997 16:25:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:20.997 16:25:54 -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.7ZxTJDjXUS 00:14:20.997 16:25:54 -- target/tls.sh@49 -- # local key=/tmp/tmp.7ZxTJDjXUS 00:14:20.997 16:25:54 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:20.997 [2024-04-17 16:25:54.945517] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:20.997 16:25:54 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:21.256 16:25:55 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:21.515 [2024-04-17 16:25:55.433574] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:21.515 [2024-04-17 16:25:55.433827] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:21.515 16:25:55 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:21.773 malloc0 00:14:21.773 16:25:55 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:22.031 16:25:55 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7ZxTJDjXUS 00:14:22.290 [2024-04-17 16:25:56.213031] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:22.290 16:25:56 -- target/tls.sh@188 -- # bdevperf_pid=78128 00:14:22.290 16:25:56 -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:22.290 16:25:56 -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:22.290 16:25:56 -- target/tls.sh@191 -- # waitforlisten 78128 /var/tmp/bdevperf.sock 00:14:22.290 16:25:56 -- common/autotest_common.sh@817 -- # '[' -z 78128 ']' 00:14:22.290 16:25:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:22.290 16:25:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:22.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
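Annotation: the two save_config dumps that follow snapshot the live JSON configuration of both daemons, target and bdevperf, exactly as built up by the RPCs above. A sketch of the equivalent standalone usage (the output file names are illustrative):

  rpc.py save_config > tgt.json                           # target-side configuration
  rpc.py -s /var/tmp/bdevperf.sock save_config > bp.json  # initiator-side configuration

Note that the target dump embeds the PSK path under nvmf_subsystem_add_host while the bdevperf dump embeds it under bdev_nvme_attach_controller, so either side can be reconstructed from its own JSON alone.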
00:14:22.290 16:25:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:22.290 16:25:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:22.290 16:25:56 -- common/autotest_common.sh@10 -- # set +x 00:14:22.290 [2024-04-17 16:25:56.279613] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:14:22.290 [2024-04-17 16:25:56.279712] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78128 ] 00:14:22.548 [2024-04-17 16:25:56.420031] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.548 [2024-04-17 16:25:56.582210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:23.483 16:25:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:23.483 16:25:57 -- common/autotest_common.sh@850 -- # return 0 00:14:23.483 16:25:57 -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7ZxTJDjXUS 00:14:23.483 [2024-04-17 16:25:57.500221] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:23.483 [2024-04-17 16:25:57.500339] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:23.740 TLSTESTn1 00:14:23.740 16:25:57 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:14:23.998 16:25:57 -- target/tls.sh@196 -- # tgtconf='{ 00:14:23.998 "subsystems": [ 00:14:23.998 { 00:14:23.998 "subsystem": "keyring", 00:14:23.998 "config": [] 00:14:23.998 }, 00:14:23.998 { 00:14:23.998 "subsystem": "iobuf", 00:14:23.998 "config": [ 00:14:23.998 { 00:14:23.998 "method": "iobuf_set_options", 00:14:23.998 "params": { 00:14:23.998 "large_bufsize": 135168, 00:14:23.998 "large_pool_count": 1024, 00:14:23.998 "small_bufsize": 8192, 00:14:23.998 "small_pool_count": 8192 00:14:23.998 } 00:14:23.998 } 00:14:23.998 ] 00:14:23.998 }, 00:14:23.998 { 00:14:23.998 "subsystem": "sock", 00:14:23.998 "config": [ 00:14:23.998 { 00:14:23.998 "method": "sock_impl_set_options", 00:14:23.998 "params": { 00:14:23.998 "enable_ktls": false, 00:14:23.998 "enable_placement_id": 0, 00:14:23.998 "enable_quickack": false, 00:14:23.998 "enable_recv_pipe": true, 00:14:23.998 "enable_zerocopy_send_client": false, 00:14:23.998 "enable_zerocopy_send_server": true, 00:14:23.998 "impl_name": "posix", 00:14:23.998 "recv_buf_size": 2097152, 00:14:23.998 "send_buf_size": 2097152, 00:14:23.998 "tls_version": 0, 00:14:23.998 "zerocopy_threshold": 0 00:14:23.998 } 00:14:23.998 }, 00:14:23.998 { 00:14:23.998 "method": "sock_impl_set_options", 00:14:23.998 "params": { 00:14:23.998 "enable_ktls": false, 00:14:23.998 "enable_placement_id": 0, 00:14:23.998 "enable_quickack": false, 00:14:23.998 "enable_recv_pipe": true, 00:14:23.998 "enable_zerocopy_send_client": false, 00:14:23.998 "enable_zerocopy_send_server": true, 00:14:23.998 "impl_name": "ssl", 00:14:23.998 "recv_buf_size": 4096, 00:14:23.998 "send_buf_size": 4096, 00:14:23.998 "tls_version": 0, 00:14:23.998 "zerocopy_threshold": 0 00:14:23.998 } 00:14:23.998 } 00:14:23.998 ] 00:14:23.998 }, 
00:14:23.998 { 00:14:23.998 "subsystem": "vmd", 00:14:23.998 "config": [] 00:14:23.998 }, 00:14:23.998 { 00:14:23.998 "subsystem": "accel", 00:14:23.998 "config": [ 00:14:23.998 { 00:14:23.998 "method": "accel_set_options", 00:14:23.998 "params": { 00:14:23.998 "buf_count": 2048, 00:14:23.998 "large_cache_size": 16, 00:14:23.998 "sequence_count": 2048, 00:14:23.998 "small_cache_size": 128, 00:14:23.998 "task_count": 2048 00:14:23.998 } 00:14:23.998 } 00:14:23.998 ] 00:14:23.998 }, 00:14:23.998 { 00:14:23.998 "subsystem": "bdev", 00:14:23.998 "config": [ 00:14:23.998 { 00:14:23.998 "method": "bdev_set_options", 00:14:23.998 "params": { 00:14:23.998 "bdev_auto_examine": true, 00:14:23.998 "bdev_io_cache_size": 256, 00:14:23.998 "bdev_io_pool_size": 65535, 00:14:23.999 "iobuf_large_cache_size": 16, 00:14:23.999 "iobuf_small_cache_size": 128 00:14:23.999 } 00:14:23.999 }, 00:14:23.999 { 00:14:23.999 "method": "bdev_raid_set_options", 00:14:23.999 "params": { 00:14:23.999 "process_window_size_kb": 1024 00:14:23.999 } 00:14:23.999 }, 00:14:23.999 { 00:14:23.999 "method": "bdev_iscsi_set_options", 00:14:23.999 "params": { 00:14:23.999 "timeout_sec": 30 00:14:23.999 } 00:14:23.999 }, 00:14:23.999 { 00:14:23.999 "method": "bdev_nvme_set_options", 00:14:23.999 "params": { 00:14:23.999 "action_on_timeout": "none", 00:14:23.999 "allow_accel_sequence": false, 00:14:23.999 "arbitration_burst": 0, 00:14:23.999 "bdev_retry_count": 3, 00:14:23.999 "ctrlr_loss_timeout_sec": 0, 00:14:23.999 "delay_cmd_submit": true, 00:14:23.999 "dhchap_dhgroups": [ 00:14:23.999 "null", 00:14:23.999 "ffdhe2048", 00:14:23.999 "ffdhe3072", 00:14:23.999 "ffdhe4096", 00:14:23.999 "ffdhe6144", 00:14:23.999 "ffdhe8192" 00:14:23.999 ], 00:14:23.999 "dhchap_digests": [ 00:14:23.999 "sha256", 00:14:23.999 "sha384", 00:14:23.999 "sha512" 00:14:23.999 ], 00:14:23.999 "disable_auto_failback": false, 00:14:23.999 "fast_io_fail_timeout_sec": 0, 00:14:23.999 "generate_uuids": false, 00:14:23.999 "high_priority_weight": 0, 00:14:23.999 "io_path_stat": false, 00:14:23.999 "io_queue_requests": 0, 00:14:23.999 "keep_alive_timeout_ms": 10000, 00:14:23.999 "low_priority_weight": 0, 00:14:23.999 "medium_priority_weight": 0, 00:14:23.999 "nvme_adminq_poll_period_us": 10000, 00:14:23.999 "nvme_error_stat": false, 00:14:23.999 "nvme_ioq_poll_period_us": 0, 00:14:23.999 "rdma_cm_event_timeout_ms": 0, 00:14:23.999 "rdma_max_cq_size": 0, 00:14:23.999 "rdma_srq_size": 0, 00:14:23.999 "reconnect_delay_sec": 0, 00:14:23.999 "timeout_admin_us": 0, 00:14:23.999 "timeout_us": 0, 00:14:23.999 "transport_ack_timeout": 0, 00:14:23.999 "transport_retry_count": 4, 00:14:23.999 "transport_tos": 0 00:14:23.999 } 00:14:23.999 }, 00:14:23.999 { 00:14:23.999 "method": "bdev_nvme_set_hotplug", 00:14:23.999 "params": { 00:14:23.999 "enable": false, 00:14:23.999 "period_us": 100000 00:14:23.999 } 00:14:23.999 }, 00:14:23.999 { 00:14:23.999 "method": "bdev_malloc_create", 00:14:23.999 "params": { 00:14:23.999 "block_size": 4096, 00:14:23.999 "name": "malloc0", 00:14:23.999 "num_blocks": 8192, 00:14:23.999 "optimal_io_boundary": 0, 00:14:23.999 "physical_block_size": 4096, 00:14:23.999 "uuid": "7c66b64e-7afb-4a25-ac88-5bd1d62a2b84" 00:14:23.999 } 00:14:23.999 }, 00:14:23.999 { 00:14:23.999 "method": "bdev_wait_for_examine" 00:14:23.999 } 00:14:23.999 ] 00:14:23.999 }, 00:14:23.999 { 00:14:23.999 "subsystem": "nbd", 00:14:23.999 "config": [] 00:14:23.999 }, 00:14:23.999 { 00:14:23.999 "subsystem": "scheduler", 00:14:23.999 "config": [ 00:14:23.999 { 00:14:23.999 "method": 
"framework_set_scheduler", 00:14:23.999 "params": { 00:14:23.999 "name": "static" 00:14:23.999 } 00:14:23.999 } 00:14:23.999 ] 00:14:23.999 }, 00:14:23.999 { 00:14:23.999 "subsystem": "nvmf", 00:14:23.999 "config": [ 00:14:23.999 { 00:14:23.999 "method": "nvmf_set_config", 00:14:23.999 "params": { 00:14:23.999 "admin_cmd_passthru": { 00:14:23.999 "identify_ctrlr": false 00:14:23.999 }, 00:14:23.999 "discovery_filter": "match_any" 00:14:23.999 } 00:14:23.999 }, 00:14:23.999 { 00:14:23.999 "method": "nvmf_set_max_subsystems", 00:14:23.999 "params": { 00:14:23.999 "max_subsystems": 1024 00:14:23.999 } 00:14:23.999 }, 00:14:23.999 { 00:14:23.999 "method": "nvmf_set_crdt", 00:14:23.999 "params": { 00:14:23.999 "crdt1": 0, 00:14:23.999 "crdt2": 0, 00:14:23.999 "crdt3": 0 00:14:23.999 } 00:14:23.999 }, 00:14:23.999 { 00:14:23.999 "method": "nvmf_create_transport", 00:14:23.999 "params": { 00:14:23.999 "abort_timeout_sec": 1, 00:14:23.999 "ack_timeout": 0, 00:14:23.999 "buf_cache_size": 4294967295, 00:14:23.999 "c2h_success": false, 00:14:23.999 "dif_insert_or_strip": false, 00:14:23.999 "in_capsule_data_size": 4096, 00:14:23.999 "io_unit_size": 131072, 00:14:23.999 "max_aq_depth": 128, 00:14:23.999 "max_io_qpairs_per_ctrlr": 127, 00:14:23.999 "max_io_size": 131072, 00:14:23.999 "max_queue_depth": 128, 00:14:23.999 "num_shared_buffers": 511, 00:14:23.999 "sock_priority": 0, 00:14:23.999 "trtype": "TCP", 00:14:23.999 "zcopy": false 00:14:23.999 } 00:14:23.999 }, 00:14:23.999 { 00:14:23.999 "method": "nvmf_create_subsystem", 00:14:23.999 "params": { 00:14:23.999 "allow_any_host": false, 00:14:23.999 "ana_reporting": false, 00:14:23.999 "max_cntlid": 65519, 00:14:23.999 "max_namespaces": 10, 00:14:23.999 "min_cntlid": 1, 00:14:23.999 "model_number": "SPDK bdev Controller", 00:14:23.999 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:23.999 "serial_number": "SPDK00000000000001" 00:14:23.999 } 00:14:23.999 }, 00:14:23.999 { 00:14:23.999 "method": "nvmf_subsystem_add_host", 00:14:23.999 "params": { 00:14:23.999 "host": "nqn.2016-06.io.spdk:host1", 00:14:23.999 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:23.999 "psk": "/tmp/tmp.7ZxTJDjXUS" 00:14:23.999 } 00:14:23.999 }, 00:14:23.999 { 00:14:23.999 "method": "nvmf_subsystem_add_ns", 00:14:23.999 "params": { 00:14:23.999 "namespace": { 00:14:23.999 "bdev_name": "malloc0", 00:14:23.999 "nguid": "7C66B64E7AFB4A25AC885BD1D62A2B84", 00:14:23.999 "no_auto_visible": false, 00:14:23.999 "nsid": 1, 00:14:23.999 "uuid": "7c66b64e-7afb-4a25-ac88-5bd1d62a2b84" 00:14:23.999 }, 00:14:23.999 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:14:23.999 } 00:14:23.999 }, 00:14:23.999 { 00:14:23.999 "method": "nvmf_subsystem_add_listener", 00:14:23.999 "params": { 00:14:23.999 "listen_address": { 00:14:23.999 "adrfam": "IPv4", 00:14:23.999 "traddr": "10.0.0.2", 00:14:23.999 "trsvcid": "4420", 00:14:23.999 "trtype": "TCP" 00:14:23.999 }, 00:14:23.999 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:23.999 "secure_channel": true 00:14:23.999 } 00:14:23.999 } 00:14:23.999 ] 00:14:23.999 } 00:14:23.999 ] 00:14:23.999 }' 00:14:23.999 16:25:57 -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:24.258 16:25:58 -- target/tls.sh@197 -- # bdevperfconf='{ 00:14:24.258 "subsystems": [ 00:14:24.258 { 00:14:24.258 "subsystem": "keyring", 00:14:24.258 "config": [] 00:14:24.258 }, 00:14:24.258 { 00:14:24.258 "subsystem": "iobuf", 00:14:24.258 "config": [ 00:14:24.258 { 00:14:24.258 "method": "iobuf_set_options", 00:14:24.258 "params": { 00:14:24.258 
"large_bufsize": 135168, 00:14:24.258 "large_pool_count": 1024, 00:14:24.258 "small_bufsize": 8192, 00:14:24.258 "small_pool_count": 8192 00:14:24.258 } 00:14:24.258 } 00:14:24.258 ] 00:14:24.258 }, 00:14:24.258 { 00:14:24.258 "subsystem": "sock", 00:14:24.258 "config": [ 00:14:24.258 { 00:14:24.258 "method": "sock_impl_set_options", 00:14:24.258 "params": { 00:14:24.258 "enable_ktls": false, 00:14:24.258 "enable_placement_id": 0, 00:14:24.258 "enable_quickack": false, 00:14:24.258 "enable_recv_pipe": true, 00:14:24.258 "enable_zerocopy_send_client": false, 00:14:24.258 "enable_zerocopy_send_server": true, 00:14:24.258 "impl_name": "posix", 00:14:24.258 "recv_buf_size": 2097152, 00:14:24.258 "send_buf_size": 2097152, 00:14:24.258 "tls_version": 0, 00:14:24.258 "zerocopy_threshold": 0 00:14:24.258 } 00:14:24.258 }, 00:14:24.258 { 00:14:24.258 "method": "sock_impl_set_options", 00:14:24.258 "params": { 00:14:24.258 "enable_ktls": false, 00:14:24.258 "enable_placement_id": 0, 00:14:24.258 "enable_quickack": false, 00:14:24.258 "enable_recv_pipe": true, 00:14:24.258 "enable_zerocopy_send_client": false, 00:14:24.258 "enable_zerocopy_send_server": true, 00:14:24.258 "impl_name": "ssl", 00:14:24.258 "recv_buf_size": 4096, 00:14:24.258 "send_buf_size": 4096, 00:14:24.258 "tls_version": 0, 00:14:24.258 "zerocopy_threshold": 0 00:14:24.258 } 00:14:24.258 } 00:14:24.258 ] 00:14:24.258 }, 00:14:24.258 { 00:14:24.258 "subsystem": "vmd", 00:14:24.258 "config": [] 00:14:24.258 }, 00:14:24.258 { 00:14:24.258 "subsystem": "accel", 00:14:24.258 "config": [ 00:14:24.258 { 00:14:24.258 "method": "accel_set_options", 00:14:24.258 "params": { 00:14:24.258 "buf_count": 2048, 00:14:24.258 "large_cache_size": 16, 00:14:24.258 "sequence_count": 2048, 00:14:24.258 "small_cache_size": 128, 00:14:24.258 "task_count": 2048 00:14:24.258 } 00:14:24.258 } 00:14:24.258 ] 00:14:24.258 }, 00:14:24.258 { 00:14:24.258 "subsystem": "bdev", 00:14:24.258 "config": [ 00:14:24.258 { 00:14:24.258 "method": "bdev_set_options", 00:14:24.258 "params": { 00:14:24.258 "bdev_auto_examine": true, 00:14:24.258 "bdev_io_cache_size": 256, 00:14:24.258 "bdev_io_pool_size": 65535, 00:14:24.258 "iobuf_large_cache_size": 16, 00:14:24.258 "iobuf_small_cache_size": 128 00:14:24.258 } 00:14:24.258 }, 00:14:24.258 { 00:14:24.258 "method": "bdev_raid_set_options", 00:14:24.258 "params": { 00:14:24.258 "process_window_size_kb": 1024 00:14:24.258 } 00:14:24.258 }, 00:14:24.258 { 00:14:24.258 "method": "bdev_iscsi_set_options", 00:14:24.258 "params": { 00:14:24.258 "timeout_sec": 30 00:14:24.258 } 00:14:24.258 }, 00:14:24.258 { 00:14:24.258 "method": "bdev_nvme_set_options", 00:14:24.258 "params": { 00:14:24.258 "action_on_timeout": "none", 00:14:24.258 "allow_accel_sequence": false, 00:14:24.258 "arbitration_burst": 0, 00:14:24.258 "bdev_retry_count": 3, 00:14:24.258 "ctrlr_loss_timeout_sec": 0, 00:14:24.258 "delay_cmd_submit": true, 00:14:24.258 "dhchap_dhgroups": [ 00:14:24.258 "null", 00:14:24.258 "ffdhe2048", 00:14:24.258 "ffdhe3072", 00:14:24.258 "ffdhe4096", 00:14:24.258 "ffdhe6144", 00:14:24.258 "ffdhe8192" 00:14:24.258 ], 00:14:24.258 "dhchap_digests": [ 00:14:24.258 "sha256", 00:14:24.258 "sha384", 00:14:24.258 "sha512" 00:14:24.258 ], 00:14:24.258 "disable_auto_failback": false, 00:14:24.258 "fast_io_fail_timeout_sec": 0, 00:14:24.258 "generate_uuids": false, 00:14:24.258 "high_priority_weight": 0, 00:14:24.258 "io_path_stat": false, 00:14:24.258 "io_queue_requests": 512, 00:14:24.258 "keep_alive_timeout_ms": 10000, 00:14:24.258 
"low_priority_weight": 0, 00:14:24.258 "medium_priority_weight": 0, 00:14:24.258 "nvme_adminq_poll_period_us": 10000, 00:14:24.258 "nvme_error_stat": false, 00:14:24.258 "nvme_ioq_poll_period_us": 0, 00:14:24.258 "rdma_cm_event_timeout_ms": 0, 00:14:24.258 "rdma_max_cq_size": 0, 00:14:24.258 "rdma_srq_size": 0, 00:14:24.258 "reconnect_delay_sec": 0, 00:14:24.258 "timeout_admin_us": 0, 00:14:24.258 "timeout_us": 0, 00:14:24.258 "transport_ack_timeout": 0, 00:14:24.258 "transport_retry_count": 4, 00:14:24.258 "transport_tos": 0 00:14:24.258 } 00:14:24.258 }, 00:14:24.258 { 00:14:24.258 "method": "bdev_nvme_attach_controller", 00:14:24.258 "params": { 00:14:24.258 "adrfam": "IPv4", 00:14:24.258 "ctrlr_loss_timeout_sec": 0, 00:14:24.258 "ddgst": false, 00:14:24.258 "fast_io_fail_timeout_sec": 0, 00:14:24.258 "hdgst": false, 00:14:24.258 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:24.258 "name": "TLSTEST", 00:14:24.258 "prchk_guard": false, 00:14:24.258 "prchk_reftag": false, 00:14:24.258 "psk": "/tmp/tmp.7ZxTJDjXUS", 00:14:24.259 "reconnect_delay_sec": 0, 00:14:24.259 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:24.259 "traddr": "10.0.0.2", 00:14:24.259 "trsvcid": "4420", 00:14:24.259 "trtype": "TCP" 00:14:24.259 } 00:14:24.259 }, 00:14:24.259 { 00:14:24.259 "method": "bdev_nvme_set_hotplug", 00:14:24.259 "params": { 00:14:24.259 "enable": false, 00:14:24.259 "period_us": 100000 00:14:24.259 } 00:14:24.259 }, 00:14:24.259 { 00:14:24.259 "method": "bdev_wait_for_examine" 00:14:24.259 } 00:14:24.259 ] 00:14:24.259 }, 00:14:24.259 { 00:14:24.259 "subsystem": "nbd", 00:14:24.259 "config": [] 00:14:24.259 } 00:14:24.259 ] 00:14:24.259 }' 00:14:24.259 16:25:58 -- target/tls.sh@199 -- # killprocess 78128 00:14:24.259 16:25:58 -- common/autotest_common.sh@936 -- # '[' -z 78128 ']' 00:14:24.259 16:25:58 -- common/autotest_common.sh@940 -- # kill -0 78128 00:14:24.259 16:25:58 -- common/autotest_common.sh@941 -- # uname 00:14:24.259 16:25:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:24.259 16:25:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78128 00:14:24.259 killing process with pid 78128 00:14:24.259 Received shutdown signal, test time was about 10.000000 seconds 00:14:24.259 00:14:24.259 Latency(us) 00:14:24.259 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:24.259 =================================================================================================================== 00:14:24.259 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:24.259 16:25:58 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:14:24.259 16:25:58 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:14:24.259 16:25:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78128' 00:14:24.259 16:25:58 -- common/autotest_common.sh@955 -- # kill 78128 00:14:24.259 [2024-04-17 16:25:58.294019] app.c: 930:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:24.259 16:25:58 -- common/autotest_common.sh@960 -- # wait 78128 00:14:24.517 16:25:58 -- target/tls.sh@200 -- # killprocess 78024 00:14:24.517 16:25:58 -- common/autotest_common.sh@936 -- # '[' -z 78024 ']' 00:14:24.517 16:25:58 -- common/autotest_common.sh@940 -- # kill -0 78024 00:14:24.517 16:25:58 -- common/autotest_common.sh@941 -- # uname 00:14:24.775 16:25:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:24.775 16:25:58 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78024 00:14:24.775 killing process with pid 78024 00:14:24.775 16:25:58 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:24.775 16:25:58 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:24.775 16:25:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78024' 00:14:24.775 16:25:58 -- common/autotest_common.sh@955 -- # kill 78024 00:14:24.775 [2024-04-17 16:25:58.585895] app.c: 930:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:24.775 16:25:58 -- common/autotest_common.sh@960 -- # wait 78024 00:14:25.033 16:25:58 -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:14:25.033 16:25:58 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:25.033 16:25:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:25.033 16:25:58 -- common/autotest_common.sh@10 -- # set +x 00:14:25.033 16:25:58 -- target/tls.sh@203 -- # echo '{ 00:14:25.033 "subsystems": [ 00:14:25.033 { 00:14:25.033 "subsystem": "keyring", 00:14:25.033 "config": [] 00:14:25.033 }, 00:14:25.033 { 00:14:25.033 "subsystem": "iobuf", 00:14:25.033 "config": [ 00:14:25.033 { 00:14:25.033 "method": "iobuf_set_options", 00:14:25.033 "params": { 00:14:25.033 "large_bufsize": 135168, 00:14:25.033 "large_pool_count": 1024, 00:14:25.034 "small_bufsize": 8192, 00:14:25.034 "small_pool_count": 8192 00:14:25.034 } 00:14:25.034 } 00:14:25.034 ] 00:14:25.034 }, 00:14:25.034 { 00:14:25.034 "subsystem": "sock", 00:14:25.034 "config": [ 00:14:25.034 { 00:14:25.034 "method": "sock_impl_set_options", 00:14:25.034 "params": { 00:14:25.034 "enable_ktls": false, 00:14:25.034 "enable_placement_id": 0, 00:14:25.034 "enable_quickack": false, 00:14:25.034 "enable_recv_pipe": true, 00:14:25.034 "enable_zerocopy_send_client": false, 00:14:25.034 "enable_zerocopy_send_server": true, 00:14:25.034 "impl_name": "posix", 00:14:25.034 "recv_buf_size": 2097152, 00:14:25.034 "send_buf_size": 2097152, 00:14:25.034 "tls_version": 0, 00:14:25.034 "zerocopy_threshold": 0 00:14:25.034 } 00:14:25.034 }, 00:14:25.034 { 00:14:25.034 "method": "sock_impl_set_options", 00:14:25.034 "params": { 00:14:25.034 "enable_ktls": false, 00:14:25.034 "enable_placement_id": 0, 00:14:25.034 "enable_quickack": false, 00:14:25.034 "enable_recv_pipe": true, 00:14:25.034 "enable_zerocopy_send_client": false, 00:14:25.034 "enable_zerocopy_send_server": true, 00:14:25.034 "impl_name": "ssl", 00:14:25.034 "recv_buf_size": 4096, 00:14:25.034 "send_buf_size": 4096, 00:14:25.034 "tls_version": 0, 00:14:25.034 "zerocopy_threshold": 0 00:14:25.034 } 00:14:25.034 } 00:14:25.034 ] 00:14:25.034 }, 00:14:25.034 { 00:14:25.034 "subsystem": "vmd", 00:14:25.034 "config": [] 00:14:25.034 }, 00:14:25.034 { 00:14:25.034 "subsystem": "accel", 00:14:25.034 "config": [ 00:14:25.034 { 00:14:25.034 "method": "accel_set_options", 00:14:25.034 "params": { 00:14:25.034 "buf_count": 2048, 00:14:25.034 "large_cache_size": 16, 00:14:25.034 "sequence_count": 2048, 00:14:25.034 "small_cache_size": 128, 00:14:25.034 "task_count": 2048 00:14:25.034 } 00:14:25.034 } 00:14:25.034 ] 00:14:25.034 }, 00:14:25.034 { 00:14:25.034 "subsystem": "bdev", 00:14:25.034 "config": [ 00:14:25.034 { 00:14:25.034 "method": "bdev_set_options", 00:14:25.034 "params": { 00:14:25.034 "bdev_auto_examine": true, 00:14:25.034 "bdev_io_cache_size": 256, 00:14:25.034 "bdev_io_pool_size": 65535, 00:14:25.034 "iobuf_large_cache_size": 16, 
00:14:25.034 "iobuf_small_cache_size": 128 00:14:25.034 } 00:14:25.034 }, 00:14:25.034 { 00:14:25.034 "method": "bdev_raid_set_options", 00:14:25.034 "params": { 00:14:25.034 "process_window_size_kb": 1024 00:14:25.034 } 00:14:25.034 }, 00:14:25.034 { 00:14:25.034 "method": "bdev_iscsi_set_options", 00:14:25.034 "params": { 00:14:25.034 "timeout_sec": 30 00:14:25.034 } 00:14:25.034 }, 00:14:25.034 { 00:14:25.034 "method": "bdev_nvme_set_options", 00:14:25.034 "params": { 00:14:25.034 "action_on_timeout": "none", 00:14:25.034 "allow_accel_sequence": false, 00:14:25.034 "arbitration_burst": 0, 00:14:25.034 "bdev_retry_count": 3, 00:14:25.034 "ctrlr_loss_timeout_sec": 0, 00:14:25.034 "delay_cmd_submit": true, 00:14:25.034 "dhchap_dhgroups": [ 00:14:25.034 "null", 00:14:25.034 "ffdhe2048", 00:14:25.034 "ffdhe3072", 00:14:25.034 "ffdhe4096", 00:14:25.034 "ffdhe6144", 00:14:25.034 "ffdhe8192" 00:14:25.034 ], 00:14:25.034 "dhchap_digests": [ 00:14:25.034 "sha256", 00:14:25.034 "sha384", 00:14:25.034 "sha512" 00:14:25.034 ], 00:14:25.034 "disable_auto_failback": false, 00:14:25.034 "fast_io_fail_timeout_sec": 0, 00:14:25.034 "generate_uuids": false, 00:14:25.034 "high_priority_weight": 0, 00:14:25.034 "io_path_stat": false, 00:14:25.034 "io_queue_requests": 0, 00:14:25.034 "keep_alive_timeout_ms": 10000, 00:14:25.034 "low_priority_weight": 0, 00:14:25.034 "medium_priority_weight": 0, 00:14:25.034 "nvme_adminq_poll_period_us": 10000, 00:14:25.034 "nvme_error_stat": false, 00:14:25.034 "nvme_ioq_poll_period_us": 0, 00:14:25.034 "rdma_cm_event_timeout_ms": 0, 00:14:25.034 "rdma_max_cq_size": 0, 00:14:25.034 "rdma_srq_size": 0, 00:14:25.034 "reconnect_delay_sec": 0, 00:14:25.034 "timeout_admin_us": 0, 00:14:25.034 "timeout_us": 0, 00:14:25.034 "transport_ack_timeout": 0, 00:14:25.034 "transport_retry_count": 4, 00:14:25.034 "transport_tos": 0 00:14:25.034 } 00:14:25.034 }, 00:14:25.034 { 00:14:25.034 "method": "bdev_nvme_set_hotplug", 00:14:25.034 "params": { 00:14:25.034 "enable": false, 00:14:25.034 "period_us": 100000 00:14:25.034 } 00:14:25.034 }, 00:14:25.034 { 00:14:25.034 "method": "bdev_malloc_create", 00:14:25.034 "params": { 00:14:25.034 "block_size": 4096, 00:14:25.034 "name": "malloc0", 00:14:25.034 "num_blocks": 8192, 00:14:25.034 "optimal_io_boundary": 0, 00:14:25.034 "physical_block_size": 4096, 00:14:25.034 "uuid": "7c66b64e-7afb-4a25-ac88-5bd1d62a2b84" 00:14:25.034 } 00:14:25.034 }, 00:14:25.034 { 00:14:25.034 "method": "bdev_wait_for_examine" 00:14:25.034 } 00:14:25.034 ] 00:14:25.034 }, 00:14:25.034 { 00:14:25.034 "subsystem": "nbd", 00:14:25.034 "config": [] 00:14:25.034 }, 00:14:25.034 { 00:14:25.034 "subsystem": "scheduler", 00:14:25.034 "config": [ 00:14:25.034 { 00:14:25.034 "method": "framework_set_scheduler", 00:14:25.034 "params": { 00:14:25.034 "name": "static" 00:14:25.034 } 00:14:25.034 } 00:14:25.034 ] 00:14:25.034 }, 00:14:25.034 { 00:14:25.034 "subsystem": "nvmf", 00:14:25.034 "config": [ 00:14:25.034 { 00:14:25.034 "method": "nvmf_set_config", 00:14:25.034 "params": { 00:14:25.034 "admin_cmd_passthru": { 00:14:25.034 "identify_ctrlr": false 00:14:25.034 }, 00:14:25.034 "discovery_filter": "match_any" 00:14:25.034 } 00:14:25.034 }, 00:14:25.034 { 00:14:25.034 "method": "nvmf_set_max_subsystems", 00:14:25.034 "params": { 00:14:25.034 "max_subsystems": 1024 00:14:25.034 } 00:14:25.034 }, 00:14:25.034 { 00:14:25.034 "method": "nvmf_set_crdt", 00:14:25.034 "params": { 00:14:25.034 "crdt1": 0, 00:14:25.034 "crdt2": 0, 00:14:25.034 "crdt3": 0 00:14:25.034 } 00:14:25.034 }, 
00:14:25.034 { 00:14:25.034 "method": "nvmf_create_transport", 00:14:25.034 "params": { 00:14:25.034 "abort_timeout_sec": 1, 00:14:25.034 "ack_timeout": 0, 00:14:25.034 "buf_cache_size": 4294967295, 00:14:25.034 "c2h_success": false, 00:14:25.034 "dif_insert_or_strip": false, 00:14:25.034 "in_capsule_data_size": 4096, 00:14:25.034 "io_unit_size": 131072, 00:14:25.034 "max_aq_depth": 128, 00:14:25.034 "max_io_qpairs_per_ctrlr": 127, 00:14:25.034 "max_io_size": 131072, 00:14:25.034 "max_queue_depth": 128, 00:14:25.034 "num_shared_buffers": 511, 00:14:25.034 "sock_priority": 0, 00:14:25.034 "trtype": "TCP", 00:14:25.034 "zcopy": false 00:14:25.034 } 00:14:25.034 }, 00:14:25.034 { 00:14:25.034 "method": "nvmf_create_subsystem", 00:14:25.034 "params": { 00:14:25.034 "allow_any_host": false, 00:14:25.034 "ana_reporting": false, 00:14:25.034 "max_cntlid": 65519, 00:14:25.034 "max_namespaces": 10, 00:14:25.034 "min_cntlid": 1, 00:14:25.034 "model_number": "SPDK bdev Controller", 00:14:25.034 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:25.034 "serial_number": "SPDK00000000000001" 00:14:25.034 } 00:14:25.034 }, 00:14:25.034 { 00:14:25.034 "method": "nvmf_subsystem_add_host", 00:14:25.034 "params": { 00:14:25.035 "host": "nqn.2016-06.io.spdk:host1", 00:14:25.035 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:25.035 "psk": "/tmp/tmp.7ZxTJDjXUS" 00:14:25.035 } 00:14:25.035 }, 00:14:25.035 { 00:14:25.035 "method": "nvmf_subsystem_add_ns", 00:14:25.035 "params": { 00:14:25.035 "namespace": { 00:14:25.035 "bdev_name": "malloc0", 00:14:25.035 "nguid": "7C66B64E7AFB4A25AC885BD1D62A2B84", 00:14:25.035 "no_auto_visible": false, 00:14:25.035 "nsid": 1, 00:14:25.035 "uuid": "7c66b64e-7afb-4a25-ac88-5bd1d62a2b84" 00:14:25.035 }, 00:14:25.035 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:14:25.035 } 00:14:25.035 }, 00:14:25.035 { 00:14:25.035 "method": "nvmf_subsystem_add_listener", 00:14:25.035 "params": { 00:14:25.035 "listen_address": { 00:14:25.035 "adrfam": "IPv4", 00:14:25.035 "traddr": "10.0.0.2", 00:14:25.035 "trsvcid": "4420", 00:14:25.035 "trtype": "TCP" 00:14:25.035 }, 00:14:25.035 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:25.035 "secure_channel": true 00:14:25.035 } 00:14:25.035 } 00:14:25.035 ] 00:14:25.035 } 00:14:25.035 ] 00:14:25.035 }' 00:14:25.035 16:25:58 -- nvmf/common.sh@470 -- # nvmfpid=78202 00:14:25.035 16:25:58 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:14:25.035 16:25:58 -- nvmf/common.sh@471 -- # waitforlisten 78202 00:14:25.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:25.035 16:25:58 -- common/autotest_common.sh@817 -- # '[' -z 78202 ']' 00:14:25.035 16:25:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.035 16:25:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:25.035 16:25:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.035 16:25:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:25.035 16:25:58 -- common/autotest_common.sh@10 -- # set +x 00:14:25.035 [2024-04-17 16:25:58.918855] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 
00:14:25.035 [2024-04-17 16:25:58.918956] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:25.035 [2024-04-17 16:25:59.056381] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:25.293 [2024-04-17 16:25:59.176263] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:25.293 [2024-04-17 16:25:59.176334] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:25.293 [2024-04-17 16:25:59.176346] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:25.293 [2024-04-17 16:25:59.176355] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:25.293 [2024-04-17 16:25:59.176363] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:25.293 [2024-04-17 16:25:59.176465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:25.551 [2024-04-17 16:25:59.401295] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:25.551 [2024-04-17 16:25:59.417251] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:25.551 [2024-04-17 16:25:59.433242] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:25.551 [2024-04-17 16:25:59.433519] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:26.117 16:25:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:26.117 16:25:59 -- common/autotest_common.sh@850 -- # return 0 00:14:26.117 16:25:59 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:26.117 16:25:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:26.117 16:25:59 -- common/autotest_common.sh@10 -- # set +x 00:14:26.117 16:25:59 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:26.117 16:25:59 -- target/tls.sh@207 -- # bdevperf_pid=78246 00:14:26.117 16:25:59 -- target/tls.sh@208 -- # waitforlisten 78246 /var/tmp/bdevperf.sock 00:14:26.117 16:25:59 -- common/autotest_common.sh@817 -- # '[' -z 78246 ']' 00:14:26.117 16:25:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:26.117 16:25:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:26.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:26.117 16:25:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:14:26.117 16:25:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:26.117 16:25:59 -- common/autotest_common.sh@10 -- # set +x 00:14:26.117 16:25:59 -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:14:26.117 16:25:59 -- target/tls.sh@204 -- # echo '{ 00:14:26.117 "subsystems": [ 00:14:26.117 { 00:14:26.117 "subsystem": "keyring", 00:14:26.117 "config": [] 00:14:26.117 }, 00:14:26.117 { 00:14:26.117 "subsystem": "iobuf", 00:14:26.117 "config": [ 00:14:26.117 { 00:14:26.117 "method": "iobuf_set_options", 00:14:26.117 "params": { 00:14:26.117 "large_bufsize": 135168, 00:14:26.117 "large_pool_count": 1024, 00:14:26.117 "small_bufsize": 8192, 00:14:26.117 "small_pool_count": 8192 00:14:26.117 } 00:14:26.117 } 00:14:26.117 ] 00:14:26.117 }, 00:14:26.117 { 00:14:26.117 "subsystem": "sock", 00:14:26.117 "config": [ 00:14:26.117 { 00:14:26.117 "method": "sock_impl_set_options", 00:14:26.117 "params": { 00:14:26.117 "enable_ktls": false, 00:14:26.117 "enable_placement_id": 0, 00:14:26.117 "enable_quickack": false, 00:14:26.117 "enable_recv_pipe": true, 00:14:26.117 "enable_zerocopy_send_client": false, 00:14:26.117 "enable_zerocopy_send_server": true, 00:14:26.117 "impl_name": "posix", 00:14:26.117 "recv_buf_size": 2097152, 00:14:26.117 "send_buf_size": 2097152, 00:14:26.117 "tls_version": 0, 00:14:26.117 "zerocopy_threshold": 0 00:14:26.117 } 00:14:26.117 }, 00:14:26.117 { 00:14:26.117 "method": "sock_impl_set_options", 00:14:26.117 "params": { 00:14:26.117 "enable_ktls": false, 00:14:26.117 "enable_placement_id": 0, 00:14:26.117 "enable_quickack": false, 00:14:26.117 "enable_recv_pipe": true, 00:14:26.117 "enable_zerocopy_send_client": false, 00:14:26.117 "enable_zerocopy_send_server": true, 00:14:26.117 "impl_name": "ssl", 00:14:26.117 "recv_buf_size": 4096, 00:14:26.117 "send_buf_size": 4096, 00:14:26.117 "tls_version": 0, 00:14:26.117 "zerocopy_threshold": 0 00:14:26.117 } 00:14:26.117 } 00:14:26.117 ] 00:14:26.117 }, 00:14:26.117 { 00:14:26.117 "subsystem": "vmd", 00:14:26.117 "config": [] 00:14:26.117 }, 00:14:26.117 { 00:14:26.117 "subsystem": "accel", 00:14:26.117 "config": [ 00:14:26.117 { 00:14:26.117 "method": "accel_set_options", 00:14:26.117 "params": { 00:14:26.117 "buf_count": 2048, 00:14:26.117 "large_cache_size": 16, 00:14:26.117 "sequence_count": 2048, 00:14:26.117 "small_cache_size": 128, 00:14:26.117 "task_count": 2048 00:14:26.117 } 00:14:26.117 } 00:14:26.117 ] 00:14:26.117 }, 00:14:26.117 { 00:14:26.117 "subsystem": "bdev", 00:14:26.117 "config": [ 00:14:26.117 { 00:14:26.117 "method": "bdev_set_options", 00:14:26.117 "params": { 00:14:26.117 "bdev_auto_examine": true, 00:14:26.117 "bdev_io_cache_size": 256, 00:14:26.117 "bdev_io_pool_size": 65535, 00:14:26.117 "iobuf_large_cache_size": 16, 00:14:26.117 "iobuf_small_cache_size": 128 00:14:26.117 } 00:14:26.117 }, 00:14:26.117 { 00:14:26.117 "method": "bdev_raid_set_options", 00:14:26.117 "params": { 00:14:26.117 "process_window_size_kb": 1024 00:14:26.117 } 00:14:26.117 }, 00:14:26.117 { 00:14:26.117 "method": "bdev_iscsi_set_options", 00:14:26.117 "params": { 00:14:26.117 "timeout_sec": 30 00:14:26.117 } 00:14:26.117 }, 00:14:26.117 { 00:14:26.117 "method": "bdev_nvme_set_options", 00:14:26.117 "params": { 00:14:26.117 "action_on_timeout": "none", 00:14:26.117 "allow_accel_sequence": false, 00:14:26.117 "arbitration_burst": 0, 00:14:26.117 "bdev_retry_count": 3, 00:14:26.117 
"ctrlr_loss_timeout_sec": 0, 00:14:26.117 "delay_cmd_submit": true, 00:14:26.118 "dhchap_dhgroups": [ 00:14:26.118 "null", 00:14:26.118 "ffdhe2048", 00:14:26.118 "ffdhe3072", 00:14:26.118 "ffdhe4096", 00:14:26.118 "ffdhe6144", 00:14:26.118 "ffdhe8192" 00:14:26.118 ], 00:14:26.118 "dhchap_digests": [ 00:14:26.118 "sha256", 00:14:26.118 "sha384", 00:14:26.118 "sha512" 00:14:26.118 ], 00:14:26.118 "disable_auto_failback": false, 00:14:26.118 "fast_io_fail_timeout_sec": 0, 00:14:26.118 "generate_uuids": false, 00:14:26.118 "high_priority_weight": 0, 00:14:26.118 "io_path_stat": false, 00:14:26.118 "io_queue_requests": 512, 00:14:26.118 "keep_alive_timeout_ms": 10000, 00:14:26.118 "low_priority_weight": 0, 00:14:26.118 "medium_priority_weight": 0, 00:14:26.118 "nvme_adminq_poll_period_us": 10000, 00:14:26.118 "nvme_error_stat": false, 00:14:26.118 "nvme_ioq_poll_period_us": 0, 00:14:26.118 "rdma_cm_event_timeout_ms": 0, 00:14:26.118 "rdma_max_cq_size": 0, 00:14:26.118 "rdma_srq_size": 0, 00:14:26.118 "reconnect_delay_sec": 0, 00:14:26.118 "timeout_admin_us": 0, 00:14:26.118 "timeout_us": 0, 00:14:26.118 "transport_ack_timeout": 0, 00:14:26.118 "transport_retry_count": 4, 00:14:26.118 "transport_tos": 0 00:14:26.118 } 00:14:26.118 }, 00:14:26.118 { 00:14:26.118 "method": "bdev_nvme_attach_controller", 00:14:26.118 "params": { 00:14:26.118 "adrfam": "IPv4", 00:14:26.118 "ctrlr_loss_timeout_sec": 0, 00:14:26.118 "ddgst": false, 00:14:26.118 "fast_io_fail_timeout_sec": 0, 00:14:26.118 "hdgst": false, 00:14:26.118 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:26.118 "name": "TLSTEST", 00:14:26.118 "prchk_guard": false, 00:14:26.118 "prchk_reftag": false, 00:14:26.118 "psk": "/tmp/tmp.7ZxTJDjXUS", 00:14:26.118 "reconnect_delay_sec": 0, 00:14:26.118 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:26.118 "traddr": "10.0.0.2", 00:14:26.118 "trsvcid": "4420", 00:14:26.118 "trtype": "TCP" 00:14:26.118 } 00:14:26.118 }, 00:14:26.118 { 00:14:26.118 "method": "bdev_nvme_set_hotplug", 00:14:26.118 "params": { 00:14:26.118 "enable": false, 00:14:26.118 "period_us": 100000 00:14:26.118 } 00:14:26.118 }, 00:14:26.118 { 00:14:26.118 "method": "bdev_wait_for_examine" 00:14:26.118 } 00:14:26.118 ] 00:14:26.118 }, 00:14:26.118 { 00:14:26.118 "subsystem": "nbd", 00:14:26.118 "config": [] 00:14:26.118 } 00:14:26.118 ] 00:14:26.118 }' 00:14:26.118 [2024-04-17 16:25:59.989341] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 
00:14:26.118 [2024-04-17 16:25:59.989443] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78246 ] 00:14:26.118 [2024-04-17 16:26:00.128890] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.375 [2024-04-17 16:26:00.260583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:26.633 [2024-04-17 16:26:00.422839] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:26.633 [2024-04-17 16:26:00.422969] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:27.208 16:26:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:27.208 16:26:00 -- common/autotest_common.sh@850 -- # return 0 00:14:27.208 16:26:00 -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:27.208 Running I/O for 10 seconds... 00:14:37.179 00:14:37.179 Latency(us) 00:14:37.179 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:37.179 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:37.179 Verification LBA range: start 0x0 length 0x2000 00:14:37.179 TLSTESTn1 : 10.02 3749.38 14.65 0.00 0.00 34070.44 8281.37 38844.97 00:14:37.179 =================================================================================================================== 00:14:37.179 Total : 3749.38 14.65 0.00 0.00 34070.44 8281.37 38844.97 00:14:37.179 0 00:14:37.179 16:26:11 -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:37.179 16:26:11 -- target/tls.sh@214 -- # killprocess 78246 00:14:37.179 16:26:11 -- common/autotest_common.sh@936 -- # '[' -z 78246 ']' 00:14:37.179 16:26:11 -- common/autotest_common.sh@940 -- # kill -0 78246 00:14:37.179 16:26:11 -- common/autotest_common.sh@941 -- # uname 00:14:37.179 16:26:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:37.179 16:26:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78246 00:14:37.179 16:26:11 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:14:37.179 16:26:11 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:14:37.179 killing process with pid 78246 00:14:37.179 16:26:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78246' 00:14:37.179 Received shutdown signal, test time was about 10.000000 seconds 00:14:37.179 00:14:37.179 Latency(us) 00:14:37.179 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:37.179 =================================================================================================================== 00:14:37.179 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:37.179 16:26:11 -- common/autotest_common.sh@955 -- # kill 78246 00:14:37.179 [2024-04-17 16:26:11.098523] app.c: 930:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:37.179 16:26:11 -- common/autotest_common.sh@960 -- # wait 78246 00:14:37.447 16:26:11 -- target/tls.sh@215 -- # killprocess 78202 00:14:37.447 16:26:11 -- common/autotest_common.sh@936 -- # '[' -z 78202 ']' 00:14:37.448 16:26:11 -- common/autotest_common.sh@940 -- # kill -0 78202 00:14:37.448 16:26:11 
-- common/autotest_common.sh@941 -- # uname 00:14:37.448 16:26:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:37.448 16:26:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78202 00:14:37.448 16:26:11 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:37.448 killing process with pid 78202 00:14:37.448 16:26:11 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:37.448 16:26:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78202' 00:14:37.448 16:26:11 -- common/autotest_common.sh@955 -- # kill 78202 00:14:37.448 [2024-04-17 16:26:11.377241] app.c: 930:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:37.448 16:26:11 -- common/autotest_common.sh@960 -- # wait 78202 00:14:37.711 16:26:11 -- target/tls.sh@218 -- # nvmfappstart 00:14:37.711 16:26:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:37.711 16:26:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:37.711 16:26:11 -- common/autotest_common.sh@10 -- # set +x 00:14:37.711 16:26:11 -- nvmf/common.sh@470 -- # nvmfpid=78397 00:14:37.711 16:26:11 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:37.711 16:26:11 -- nvmf/common.sh@471 -- # waitforlisten 78397 00:14:37.711 16:26:11 -- common/autotest_common.sh@817 -- # '[' -z 78397 ']' 00:14:37.711 16:26:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:37.711 16:26:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:37.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:37.712 16:26:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:37.712 16:26:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:37.712 16:26:11 -- common/autotest_common.sh@10 -- # set +x 00:14:37.712 [2024-04-17 16:26:11.705989] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:14:37.712 [2024-04-17 16:26:11.706096] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:37.970 [2024-04-17 16:26:11.842217] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.970 [2024-04-17 16:26:11.982454] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:37.970 [2024-04-17 16:26:11.982516] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:37.970 [2024-04-17 16:26:11.982528] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:37.970 [2024-04-17 16:26:11.982537] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:37.970 [2024-04-17 16:26:11.982544] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
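That closes the first pass: bdevperf (pid 78246) sustained roughly 3749 IOPS of TLS-encrypted verify traffic for 10 seconds, and both deprecation warnings — the target-side "PSK path" and the initiator-side spdk_nvme_ctrlr_opts.psk — are tallied by log_deprecation_hits as the two processes shut down. A fresh target (pid 78397) now starts for the keyring-based variant. Every pass drives I/O the same way: bdevperf is launched with -z so it idles on its RPC socket, the bdev config arrives through a file descriptor, and bdevperf.py triggers the run. A sketch of that pattern, with $bperf_json standing in for the config blob dumped above:

# -z: wait on the RPC socket; process substitution supplies /dev/fd/63
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z \
    -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 \
    -c <(echo "$bperf_json") &

# kick off the verify workload; -t 20 bounds the wait for the socket
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -t 20 -s /var/tmp/bdevperf.sock perform_tests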
00:14:37.970 [2024-04-17 16:26:11.982578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.904 16:26:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:38.904 16:26:12 -- common/autotest_common.sh@850 -- # return 0 00:14:38.904 16:26:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:38.904 16:26:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:38.904 16:26:12 -- common/autotest_common.sh@10 -- # set +x 00:14:38.904 16:26:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:38.904 16:26:12 -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.7ZxTJDjXUS 00:14:38.904 16:26:12 -- target/tls.sh@49 -- # local key=/tmp/tmp.7ZxTJDjXUS 00:14:38.904 16:26:12 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:38.905 [2024-04-17 16:26:12.932622] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:39.163 16:26:12 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:39.163 16:26:13 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:39.421 [2024-04-17 16:26:13.420720] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:39.421 [2024-04-17 16:26:13.420983] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:39.421 16:26:13 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:39.679 malloc0 00:14:39.679 16:26:13 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:39.936 16:26:13 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7ZxTJDjXUS 00:14:40.195 [2024-04-17 16:26:14.152412] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:40.195 16:26:14 -- target/tls.sh@222 -- # bdevperf_pid=78500 00:14:40.195 16:26:14 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:40.195 16:26:14 -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:40.195 16:26:14 -- target/tls.sh@225 -- # waitforlisten 78500 /var/tmp/bdevperf.sock 00:14:40.195 16:26:14 -- common/autotest_common.sh@817 -- # '[' -z 78500 ']' 00:14:40.195 16:26:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:40.195 16:26:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:40.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:40.195 16:26:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:40.195 16:26:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:40.195 16:26:14 -- common/autotest_common.sh@10 -- # set +x 00:14:40.195 [2024-04-17 16:26:14.231456] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 
00:14:40.195 [2024-04-17 16:26:14.231572] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78500 ] 00:14:40.453 [2024-04-17 16:26:14.371962] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.711 [2024-04-17 16:26:14.505894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:41.277 16:26:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:41.277 16:26:15 -- common/autotest_common.sh@850 -- # return 0 00:14:41.277 16:26:15 -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.7ZxTJDjXUS 00:14:41.535 16:26:15 -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:41.793 [2024-04-17 16:26:15.762723] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:42.051 nvme0n1 00:14:42.051 16:26:15 -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:42.051 Running I/O for 1 seconds... 00:14:42.985 00:14:42.985 Latency(us) 00:14:42.985 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:42.985 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:42.985 Verification LBA range: start 0x0 length 0x2000 00:14:42.985 nvme0n1 : 1.02 3647.31 14.25 0.00 0.00 34674.68 370.50 24427.05 00:14:42.985 =================================================================================================================== 00:14:42.985 Total : 3647.31 14.25 0.00 0.00 34674.68 370.50 24427.05 00:14:42.985 0 00:14:42.985 16:26:17 -- target/tls.sh@234 -- # killprocess 78500 00:14:42.985 16:26:17 -- common/autotest_common.sh@936 -- # '[' -z 78500 ']' 00:14:42.985 16:26:17 -- common/autotest_common.sh@940 -- # kill -0 78500 00:14:42.985 16:26:17 -- common/autotest_common.sh@941 -- # uname 00:14:42.985 16:26:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:42.985 16:26:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78500 00:14:43.243 killing process with pid 78500 00:14:43.243 Received shutdown signal, test time was about 1.000000 seconds 00:14:43.243 00:14:43.243 Latency(us) 00:14:43.243 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.243 =================================================================================================================== 00:14:43.243 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:43.243 16:26:17 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:43.243 16:26:17 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:43.243 16:26:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78500' 00:14:43.243 16:26:17 -- common/autotest_common.sh@955 -- # kill 78500 00:14:43.243 16:26:17 -- common/autotest_common.sh@960 -- # wait 78500 00:14:43.501 16:26:17 -- target/tls.sh@235 -- # killprocess 78397 00:14:43.501 16:26:17 -- common/autotest_common.sh@936 -- # '[' -z 78397 ']' 00:14:43.501 16:26:17 -- common/autotest_common.sh@940 -- # kill -0 78397 00:14:43.501 16:26:17 -- common/autotest_common.sh@941 -- # 
uname 00:14:43.501 16:26:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:43.501 16:26:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78397 00:14:43.501 killing process with pid 78397 00:14:43.501 16:26:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:43.501 16:26:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:43.501 16:26:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78397' 00:14:43.501 16:26:17 -- common/autotest_common.sh@955 -- # kill 78397 00:14:43.502 [2024-04-17 16:26:17.335534] app.c: 930:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:43.502 16:26:17 -- common/autotest_common.sh@960 -- # wait 78397 00:14:43.760 16:26:17 -- target/tls.sh@238 -- # nvmfappstart 00:14:43.760 16:26:17 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:43.760 16:26:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:43.760 16:26:17 -- common/autotest_common.sh@10 -- # set +x 00:14:43.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:43.760 16:26:17 -- nvmf/common.sh@470 -- # nvmfpid=78581 00:14:43.760 16:26:17 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:43.760 16:26:17 -- nvmf/common.sh@471 -- # waitforlisten 78581 00:14:43.760 16:26:17 -- common/autotest_common.sh@817 -- # '[' -z 78581 ']' 00:14:43.760 16:26:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.760 16:26:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:43.760 16:26:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.760 16:26:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:43.760 16:26:17 -- common/autotest_common.sh@10 -- # set +x 00:14:43.760 [2024-04-17 16:26:17.679930] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:14:43.760 [2024-04-17 16:26:17.680037] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:44.019 [2024-04-17 16:26:17.817832] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.019 [2024-04-17 16:26:17.937287] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:44.019 [2024-04-17 16:26:17.937346] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:44.019 [2024-04-17 16:26:17.937359] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:44.019 [2024-04-17 16:26:17.937368] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:44.019 [2024-04-17 16:26:17.937376] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
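That ends the second pass (bdevperf pid 78500, target pid 78397; roughly 3647 IOPS over one second), and a third target (pid 78581) is starting for the save_config round-trip. The pass just finished showed the non-deprecated way to hand the initiator a PSK: the interchange file is registered once with the keyring under the name key0, and the controller attach references the key by name rather than embedding a path — the form that survives save_config, as the "psk": "key0" entries in the tgtcfg and bperfcfg dumps below confirm. The two calls, exactly as issued against the bdevperf RPC socket in this run:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

# register the PSK interchange file as a named keyring entry
$rpc -s $sock keyring_file_add_key key0 /tmp/tmp.7ZxTJDjXUS

# attach over TCP; --psk with a key name selects TLS on connect
$rpc -s $sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1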
00:14:44.019 [2024-04-17 16:26:17.937410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.953 16:26:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:44.953 16:26:18 -- common/autotest_common.sh@850 -- # return 0 00:14:44.953 16:26:18 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:44.953 16:26:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:44.953 16:26:18 -- common/autotest_common.sh@10 -- # set +x 00:14:44.953 16:26:18 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:44.953 16:26:18 -- target/tls.sh@239 -- # rpc_cmd 00:14:44.953 16:26:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:44.953 16:26:18 -- common/autotest_common.sh@10 -- # set +x 00:14:44.953 [2024-04-17 16:26:18.709860] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:44.953 malloc0 00:14:44.953 [2024-04-17 16:26:18.741164] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:44.953 [2024-04-17 16:26:18.741400] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:44.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:44.953 16:26:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:44.953 16:26:18 -- target/tls.sh@252 -- # bdevperf_pid=78631 00:14:44.953 16:26:18 -- target/tls.sh@254 -- # waitforlisten 78631 /var/tmp/bdevperf.sock 00:14:44.953 16:26:18 -- common/autotest_common.sh@817 -- # '[' -z 78631 ']' 00:14:44.953 16:26:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:44.953 16:26:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:44.953 16:26:18 -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:44.953 16:26:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:44.953 16:26:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:44.953 16:26:18 -- common/autotest_common.sh@10 -- # set +x 00:14:44.953 [2024-04-17 16:26:18.861899] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 
00:14:44.954 [2024-04-17 16:26:18.862092] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78631 ] 00:14:45.211 [2024-04-17 16:26:19.019424] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.211 [2024-04-17 16:26:19.149342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:46.145 16:26:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:46.145 16:26:19 -- common/autotest_common.sh@850 -- # return 0 00:14:46.145 16:26:19 -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.7ZxTJDjXUS 00:14:46.145 16:26:20 -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:46.405 [2024-04-17 16:26:20.327580] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:46.405 nvme0n1 00:14:46.405 16:26:20 -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:46.664 Running I/O for 1 seconds... 00:14:47.599 00:14:47.599 Latency(us) 00:14:47.599 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:47.599 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:47.599 Verification LBA range: start 0x0 length 0x2000 00:14:47.599 nvme0n1 : 1.02 3666.93 14.32 0.00 0.00 34391.04 4498.15 25022.84 00:14:47.599 =================================================================================================================== 00:14:47.599 Total : 3666.93 14.32 0.00 0.00 34391.04 4498.15 25022.84 00:14:47.599 0 00:14:47.599 16:26:21 -- target/tls.sh@263 -- # rpc_cmd save_config 00:14:47.599 16:26:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:47.599 16:26:21 -- common/autotest_common.sh@10 -- # set +x 00:14:47.857 16:26:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:47.857 16:26:21 -- target/tls.sh@263 -- # tgtcfg='{ 00:14:47.857 "subsystems": [ 00:14:47.857 { 00:14:47.857 "subsystem": "keyring", 00:14:47.857 "config": [ 00:14:47.857 { 00:14:47.857 "method": "keyring_file_add_key", 00:14:47.857 "params": { 00:14:47.857 "name": "key0", 00:14:47.857 "path": "/tmp/tmp.7ZxTJDjXUS" 00:14:47.857 } 00:14:47.857 } 00:14:47.857 ] 00:14:47.857 }, 00:14:47.857 { 00:14:47.857 "subsystem": "iobuf", 00:14:47.857 "config": [ 00:14:47.857 { 00:14:47.857 "method": "iobuf_set_options", 00:14:47.857 "params": { 00:14:47.857 "large_bufsize": 135168, 00:14:47.857 "large_pool_count": 1024, 00:14:47.857 "small_bufsize": 8192, 00:14:47.857 "small_pool_count": 8192 00:14:47.857 } 00:14:47.857 } 00:14:47.857 ] 00:14:47.857 }, 00:14:47.857 { 00:14:47.857 "subsystem": "sock", 00:14:47.857 "config": [ 00:14:47.857 { 00:14:47.857 "method": "sock_impl_set_options", 00:14:47.857 "params": { 00:14:47.857 "enable_ktls": false, 00:14:47.857 "enable_placement_id": 0, 00:14:47.857 "enable_quickack": false, 00:14:47.857 "enable_recv_pipe": true, 00:14:47.857 "enable_zerocopy_send_client": false, 00:14:47.857 "enable_zerocopy_send_server": true, 00:14:47.857 "impl_name": "posix", 00:14:47.857 "recv_buf_size": 2097152, 00:14:47.857 "send_buf_size": 2097152, 
00:14:47.857 "tls_version": 0, 00:14:47.857 "zerocopy_threshold": 0 00:14:47.857 } 00:14:47.857 }, 00:14:47.857 { 00:14:47.857 "method": "sock_impl_set_options", 00:14:47.857 "params": { 00:14:47.857 "enable_ktls": false, 00:14:47.857 "enable_placement_id": 0, 00:14:47.857 "enable_quickack": false, 00:14:47.857 "enable_recv_pipe": true, 00:14:47.857 "enable_zerocopy_send_client": false, 00:14:47.857 "enable_zerocopy_send_server": true, 00:14:47.857 "impl_name": "ssl", 00:14:47.857 "recv_buf_size": 4096, 00:14:47.857 "send_buf_size": 4096, 00:14:47.857 "tls_version": 0, 00:14:47.857 "zerocopy_threshold": 0 00:14:47.857 } 00:14:47.857 } 00:14:47.857 ] 00:14:47.857 }, 00:14:47.857 { 00:14:47.857 "subsystem": "vmd", 00:14:47.857 "config": [] 00:14:47.857 }, 00:14:47.857 { 00:14:47.857 "subsystem": "accel", 00:14:47.857 "config": [ 00:14:47.857 { 00:14:47.857 "method": "accel_set_options", 00:14:47.857 "params": { 00:14:47.857 "buf_count": 2048, 00:14:47.857 "large_cache_size": 16, 00:14:47.857 "sequence_count": 2048, 00:14:47.857 "small_cache_size": 128, 00:14:47.857 "task_count": 2048 00:14:47.857 } 00:14:47.857 } 00:14:47.857 ] 00:14:47.857 }, 00:14:47.857 { 00:14:47.857 "subsystem": "bdev", 00:14:47.857 "config": [ 00:14:47.857 { 00:14:47.857 "method": "bdev_set_options", 00:14:47.857 "params": { 00:14:47.857 "bdev_auto_examine": true, 00:14:47.857 "bdev_io_cache_size": 256, 00:14:47.857 "bdev_io_pool_size": 65535, 00:14:47.857 "iobuf_large_cache_size": 16, 00:14:47.857 "iobuf_small_cache_size": 128 00:14:47.857 } 00:14:47.857 }, 00:14:47.857 { 00:14:47.857 "method": "bdev_raid_set_options", 00:14:47.857 "params": { 00:14:47.857 "process_window_size_kb": 1024 00:14:47.857 } 00:14:47.857 }, 00:14:47.857 { 00:14:47.857 "method": "bdev_iscsi_set_options", 00:14:47.857 "params": { 00:14:47.857 "timeout_sec": 30 00:14:47.857 } 00:14:47.857 }, 00:14:47.857 { 00:14:47.857 "method": "bdev_nvme_set_options", 00:14:47.857 "params": { 00:14:47.857 "action_on_timeout": "none", 00:14:47.857 "allow_accel_sequence": false, 00:14:47.857 "arbitration_burst": 0, 00:14:47.857 "bdev_retry_count": 3, 00:14:47.857 "ctrlr_loss_timeout_sec": 0, 00:14:47.857 "delay_cmd_submit": true, 00:14:47.857 "dhchap_dhgroups": [ 00:14:47.857 "null", 00:14:47.857 "ffdhe2048", 00:14:47.857 "ffdhe3072", 00:14:47.857 "ffdhe4096", 00:14:47.857 "ffdhe6144", 00:14:47.857 "ffdhe8192" 00:14:47.857 ], 00:14:47.857 "dhchap_digests": [ 00:14:47.857 "sha256", 00:14:47.857 "sha384", 00:14:47.857 "sha512" 00:14:47.857 ], 00:14:47.857 "disable_auto_failback": false, 00:14:47.857 "fast_io_fail_timeout_sec": 0, 00:14:47.857 "generate_uuids": false, 00:14:47.857 "high_priority_weight": 0, 00:14:47.857 "io_path_stat": false, 00:14:47.857 "io_queue_requests": 0, 00:14:47.857 "keep_alive_timeout_ms": 10000, 00:14:47.857 "low_priority_weight": 0, 00:14:47.857 "medium_priority_weight": 0, 00:14:47.857 "nvme_adminq_poll_period_us": 10000, 00:14:47.857 "nvme_error_stat": false, 00:14:47.857 "nvme_ioq_poll_period_us": 0, 00:14:47.857 "rdma_cm_event_timeout_ms": 0, 00:14:47.857 "rdma_max_cq_size": 0, 00:14:47.857 "rdma_srq_size": 0, 00:14:47.857 "reconnect_delay_sec": 0, 00:14:47.857 "timeout_admin_us": 0, 00:14:47.857 "timeout_us": 0, 00:14:47.857 "transport_ack_timeout": 0, 00:14:47.857 "transport_retry_count": 4, 00:14:47.857 "transport_tos": 0 00:14:47.857 } 00:14:47.857 }, 00:14:47.857 { 00:14:47.857 "method": "bdev_nvme_set_hotplug", 00:14:47.857 "params": { 00:14:47.857 "enable": false, 00:14:47.857 "period_us": 100000 00:14:47.857 } 00:14:47.857 
}, 00:14:47.857 { 00:14:47.857 "method": "bdev_malloc_create", 00:14:47.857 "params": { 00:14:47.857 "block_size": 4096, 00:14:47.858 "name": "malloc0", 00:14:47.858 "num_blocks": 8192, 00:14:47.858 "optimal_io_boundary": 0, 00:14:47.858 "physical_block_size": 4096, 00:14:47.858 "uuid": "f37b5eb2-ad0d-4d2c-8dc2-923f3ae91812" 00:14:47.858 } 00:14:47.858 }, 00:14:47.858 { 00:14:47.858 "method": "bdev_wait_for_examine" 00:14:47.858 } 00:14:47.858 ] 00:14:47.858 }, 00:14:47.858 { 00:14:47.858 "subsystem": "nbd", 00:14:47.858 "config": [] 00:14:47.858 }, 00:14:47.858 { 00:14:47.858 "subsystem": "scheduler", 00:14:47.858 "config": [ 00:14:47.858 { 00:14:47.858 "method": "framework_set_scheduler", 00:14:47.858 "params": { 00:14:47.858 "name": "static" 00:14:47.858 } 00:14:47.858 } 00:14:47.858 ] 00:14:47.858 }, 00:14:47.858 { 00:14:47.858 "subsystem": "nvmf", 00:14:47.858 "config": [ 00:14:47.858 { 00:14:47.858 "method": "nvmf_set_config", 00:14:47.858 "params": { 00:14:47.858 "admin_cmd_passthru": { 00:14:47.858 "identify_ctrlr": false 00:14:47.858 }, 00:14:47.858 "discovery_filter": "match_any" 00:14:47.858 } 00:14:47.858 }, 00:14:47.858 { 00:14:47.858 "method": "nvmf_set_max_subsystems", 00:14:47.858 "params": { 00:14:47.858 "max_subsystems": 1024 00:14:47.858 } 00:14:47.858 }, 00:14:47.858 { 00:14:47.858 "method": "nvmf_set_crdt", 00:14:47.858 "params": { 00:14:47.858 "crdt1": 0, 00:14:47.858 "crdt2": 0, 00:14:47.858 "crdt3": 0 00:14:47.858 } 00:14:47.858 }, 00:14:47.858 { 00:14:47.858 "method": "nvmf_create_transport", 00:14:47.858 "params": { 00:14:47.858 "abort_timeout_sec": 1, 00:14:47.858 "ack_timeout": 0, 00:14:47.858 "buf_cache_size": 4294967295, 00:14:47.858 "c2h_success": false, 00:14:47.858 "dif_insert_or_strip": false, 00:14:47.858 "in_capsule_data_size": 4096, 00:14:47.858 "io_unit_size": 131072, 00:14:47.858 "max_aq_depth": 128, 00:14:47.858 "max_io_qpairs_per_ctrlr": 127, 00:14:47.858 "max_io_size": 131072, 00:14:47.858 "max_queue_depth": 128, 00:14:47.858 "num_shared_buffers": 511, 00:14:47.858 "sock_priority": 0, 00:14:47.858 "trtype": "TCP", 00:14:47.858 "zcopy": false 00:14:47.858 } 00:14:47.858 }, 00:14:47.858 { 00:14:47.858 "method": "nvmf_create_subsystem", 00:14:47.858 "params": { 00:14:47.858 "allow_any_host": false, 00:14:47.858 "ana_reporting": false, 00:14:47.858 "max_cntlid": 65519, 00:14:47.858 "max_namespaces": 32, 00:14:47.858 "min_cntlid": 1, 00:14:47.858 "model_number": "SPDK bdev Controller", 00:14:47.858 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:47.858 "serial_number": "00000000000000000000" 00:14:47.858 } 00:14:47.858 }, 00:14:47.858 { 00:14:47.858 "method": "nvmf_subsystem_add_host", 00:14:47.858 "params": { 00:14:47.858 "host": "nqn.2016-06.io.spdk:host1", 00:14:47.858 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:47.858 "psk": "key0" 00:14:47.858 } 00:14:47.858 }, 00:14:47.858 { 00:14:47.858 "method": "nvmf_subsystem_add_ns", 00:14:47.858 "params": { 00:14:47.858 "namespace": { 00:14:47.858 "bdev_name": "malloc0", 00:14:47.858 "nguid": "F37B5EB2AD0D4D2C8DC2923F3AE91812", 00:14:47.858 "no_auto_visible": false, 00:14:47.858 "nsid": 1, 00:14:47.858 "uuid": "f37b5eb2-ad0d-4d2c-8dc2-923f3ae91812" 00:14:47.858 }, 00:14:47.858 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:14:47.858 } 00:14:47.858 }, 00:14:47.858 { 00:14:47.858 "method": "nvmf_subsystem_add_listener", 00:14:47.858 "params": { 00:14:47.858 "listen_address": { 00:14:47.858 "adrfam": "IPv4", 00:14:47.858 "traddr": "10.0.0.2", 00:14:47.858 "trsvcid": "4420", 00:14:47.858 "trtype": "TCP" 00:14:47.858 }, 
00:14:47.858 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:47.858 "secure_channel": true 00:14:47.858 } 00:14:47.858 } 00:14:47.858 ] 00:14:47.858 } 00:14:47.858 ] 00:14:47.858 }' 00:14:47.858 16:26:21 -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:48.117 16:26:22 -- target/tls.sh@264 -- # bperfcfg='{ 00:14:48.117 "subsystems": [ 00:14:48.117 { 00:14:48.117 "subsystem": "keyring", 00:14:48.117 "config": [ 00:14:48.117 { 00:14:48.117 "method": "keyring_file_add_key", 00:14:48.117 "params": { 00:14:48.117 "name": "key0", 00:14:48.117 "path": "/tmp/tmp.7ZxTJDjXUS" 00:14:48.117 } 00:14:48.117 } 00:14:48.117 ] 00:14:48.117 }, 00:14:48.117 { 00:14:48.117 "subsystem": "iobuf", 00:14:48.117 "config": [ 00:14:48.117 { 00:14:48.117 "method": "iobuf_set_options", 00:14:48.117 "params": { 00:14:48.117 "large_bufsize": 135168, 00:14:48.117 "large_pool_count": 1024, 00:14:48.117 "small_bufsize": 8192, 00:14:48.117 "small_pool_count": 8192 00:14:48.117 } 00:14:48.117 } 00:14:48.117 ] 00:14:48.117 }, 00:14:48.117 { 00:14:48.117 "subsystem": "sock", 00:14:48.117 "config": [ 00:14:48.117 { 00:14:48.117 "method": "sock_impl_set_options", 00:14:48.117 "params": { 00:14:48.117 "enable_ktls": false, 00:14:48.117 "enable_placement_id": 0, 00:14:48.117 "enable_quickack": false, 00:14:48.117 "enable_recv_pipe": true, 00:14:48.117 "enable_zerocopy_send_client": false, 00:14:48.117 "enable_zerocopy_send_server": true, 00:14:48.117 "impl_name": "posix", 00:14:48.117 "recv_buf_size": 2097152, 00:14:48.117 "send_buf_size": 2097152, 00:14:48.117 "tls_version": 0, 00:14:48.117 "zerocopy_threshold": 0 00:14:48.117 } 00:14:48.117 }, 00:14:48.117 { 00:14:48.117 "method": "sock_impl_set_options", 00:14:48.117 "params": { 00:14:48.117 "enable_ktls": false, 00:14:48.117 "enable_placement_id": 0, 00:14:48.117 "enable_quickack": false, 00:14:48.117 "enable_recv_pipe": true, 00:14:48.117 "enable_zerocopy_send_client": false, 00:14:48.117 "enable_zerocopy_send_server": true, 00:14:48.117 "impl_name": "ssl", 00:14:48.117 "recv_buf_size": 4096, 00:14:48.117 "send_buf_size": 4096, 00:14:48.117 "tls_version": 0, 00:14:48.117 "zerocopy_threshold": 0 00:14:48.117 } 00:14:48.117 } 00:14:48.117 ] 00:14:48.117 }, 00:14:48.117 { 00:14:48.117 "subsystem": "vmd", 00:14:48.117 "config": [] 00:14:48.117 }, 00:14:48.117 { 00:14:48.117 "subsystem": "accel", 00:14:48.117 "config": [ 00:14:48.117 { 00:14:48.117 "method": "accel_set_options", 00:14:48.118 "params": { 00:14:48.118 "buf_count": 2048, 00:14:48.118 "large_cache_size": 16, 00:14:48.118 "sequence_count": 2048, 00:14:48.118 "small_cache_size": 128, 00:14:48.118 "task_count": 2048 00:14:48.118 } 00:14:48.118 } 00:14:48.118 ] 00:14:48.118 }, 00:14:48.118 { 00:14:48.118 "subsystem": "bdev", 00:14:48.118 "config": [ 00:14:48.118 { 00:14:48.118 "method": "bdev_set_options", 00:14:48.118 "params": { 00:14:48.118 "bdev_auto_examine": true, 00:14:48.118 "bdev_io_cache_size": 256, 00:14:48.118 "bdev_io_pool_size": 65535, 00:14:48.118 "iobuf_large_cache_size": 16, 00:14:48.118 "iobuf_small_cache_size": 128 00:14:48.118 } 00:14:48.118 }, 00:14:48.118 { 00:14:48.118 "method": "bdev_raid_set_options", 00:14:48.118 "params": { 00:14:48.118 "process_window_size_kb": 1024 00:14:48.118 } 00:14:48.118 }, 00:14:48.118 { 00:14:48.118 "method": "bdev_iscsi_set_options", 00:14:48.118 "params": { 00:14:48.118 "timeout_sec": 30 00:14:48.118 } 00:14:48.118 }, 00:14:48.118 { 00:14:48.118 "method": "bdev_nvme_set_options", 00:14:48.118 "params": { 
00:14:48.118 "action_on_timeout": "none", 00:14:48.118 "allow_accel_sequence": false, 00:14:48.118 "arbitration_burst": 0, 00:14:48.118 "bdev_retry_count": 3, 00:14:48.118 "ctrlr_loss_timeout_sec": 0, 00:14:48.118 "delay_cmd_submit": true, 00:14:48.118 "dhchap_dhgroups": [ 00:14:48.118 "null", 00:14:48.118 "ffdhe2048", 00:14:48.118 "ffdhe3072", 00:14:48.118 "ffdhe4096", 00:14:48.118 "ffdhe6144", 00:14:48.118 "ffdhe8192" 00:14:48.118 ], 00:14:48.118 "dhchap_digests": [ 00:14:48.118 "sha256", 00:14:48.118 "sha384", 00:14:48.118 "sha512" 00:14:48.118 ], 00:14:48.118 "disable_auto_failback": false, 00:14:48.118 "fast_io_fail_timeout_sec": 0, 00:14:48.118 "generate_uuids": false, 00:14:48.118 "high_priority_weight": 0, 00:14:48.118 "io_path_stat": false, 00:14:48.118 "io_queue_requests": 512, 00:14:48.118 "keep_alive_timeout_ms": 10000, 00:14:48.118 "low_priority_weight": 0, 00:14:48.118 "medium_priority_weight": 0, 00:14:48.118 "nvme_adminq_poll_period_us": 10000, 00:14:48.118 "nvme_error_stat": false, 00:14:48.118 "nvme_ioq_poll_period_us": 0, 00:14:48.118 "rdma_cm_event_timeout_ms": 0, 00:14:48.118 "rdma_max_cq_size": 0, 00:14:48.118 "rdma_srq_size": 0, 00:14:48.118 "reconnect_delay_sec": 0, 00:14:48.118 "timeout_admin_us": 0, 00:14:48.118 "timeout_us": 0, 00:14:48.118 "transport_ack_timeout": 0, 00:14:48.118 "transport_retry_count": 4, 00:14:48.118 "transport_tos": 0 00:14:48.118 } 00:14:48.118 }, 00:14:48.118 { 00:14:48.118 "method": "bdev_nvme_attach_controller", 00:14:48.118 "params": { 00:14:48.118 "adrfam": "IPv4", 00:14:48.118 "ctrlr_loss_timeout_sec": 0, 00:14:48.118 "ddgst": false, 00:14:48.118 "fast_io_fail_timeout_sec": 0, 00:14:48.118 "hdgst": false, 00:14:48.118 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:48.118 "name": "nvme0", 00:14:48.118 "prchk_guard": false, 00:14:48.118 "prchk_reftag": false, 00:14:48.118 "psk": "key0", 00:14:48.118 "reconnect_delay_sec": 0, 00:14:48.118 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:48.118 "traddr": "10.0.0.2", 00:14:48.118 "trsvcid": "4420", 00:14:48.118 "trtype": "TCP" 00:14:48.118 } 00:14:48.118 }, 00:14:48.118 { 00:14:48.118 "method": "bdev_nvme_set_hotplug", 00:14:48.118 "params": { 00:14:48.118 "enable": false, 00:14:48.118 "period_us": 100000 00:14:48.118 } 00:14:48.118 }, 00:14:48.118 { 00:14:48.118 "method": "bdev_enable_histogram", 00:14:48.118 "params": { 00:14:48.118 "enable": true, 00:14:48.118 "name": "nvme0n1" 00:14:48.118 } 00:14:48.118 }, 00:14:48.118 { 00:14:48.118 "method": "bdev_wait_for_examine" 00:14:48.118 } 00:14:48.118 ] 00:14:48.118 }, 00:14:48.118 { 00:14:48.118 "subsystem": "nbd", 00:14:48.118 "config": [] 00:14:48.118 } 00:14:48.118 ] 00:14:48.118 }' 00:14:48.118 16:26:22 -- target/tls.sh@266 -- # killprocess 78631 00:14:48.118 16:26:22 -- common/autotest_common.sh@936 -- # '[' -z 78631 ']' 00:14:48.118 16:26:22 -- common/autotest_common.sh@940 -- # kill -0 78631 00:14:48.118 16:26:22 -- common/autotest_common.sh@941 -- # uname 00:14:48.118 16:26:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:48.118 16:26:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78631 00:14:48.118 16:26:22 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:48.118 16:26:22 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:48.118 killing process with pid 78631 00:14:48.118 16:26:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78631' 00:14:48.118 Received shutdown signal, test time was about 1.000000 seconds 00:14:48.118 00:14:48.118 
Latency(us) 00:14:48.118 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:48.118 =================================================================================================================== 00:14:48.118 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:48.119 16:26:22 -- common/autotest_common.sh@955 -- # kill 78631 00:14:48.119 16:26:22 -- common/autotest_common.sh@960 -- # wait 78631 00:14:48.376 16:26:22 -- target/tls.sh@267 -- # killprocess 78581 00:14:48.376 16:26:22 -- common/autotest_common.sh@936 -- # '[' -z 78581 ']' 00:14:48.376 16:26:22 -- common/autotest_common.sh@940 -- # kill -0 78581 00:14:48.376 16:26:22 -- common/autotest_common.sh@941 -- # uname 00:14:48.376 16:26:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:48.376 16:26:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78581 00:14:48.376 16:26:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:48.376 16:26:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:48.376 killing process with pid 78581 00:14:48.376 16:26:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78581' 00:14:48.376 16:26:22 -- common/autotest_common.sh@955 -- # kill 78581 00:14:48.376 16:26:22 -- common/autotest_common.sh@960 -- # wait 78581 00:14:48.634 16:26:22 -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:14:48.634 16:26:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:48.634 16:26:22 -- target/tls.sh@269 -- # echo '{ 00:14:48.634 "subsystems": [ 00:14:48.634 { 00:14:48.634 "subsystem": "keyring", 00:14:48.634 "config": [ 00:14:48.634 { 00:14:48.634 "method": "keyring_file_add_key", 00:14:48.634 "params": { 00:14:48.634 "name": "key0", 00:14:48.634 "path": "/tmp/tmp.7ZxTJDjXUS" 00:14:48.634 } 00:14:48.634 } 00:14:48.634 ] 00:14:48.634 }, 00:14:48.634 { 00:14:48.634 "subsystem": "iobuf", 00:14:48.634 "config": [ 00:14:48.634 { 00:14:48.634 "method": "iobuf_set_options", 00:14:48.634 "params": { 00:14:48.634 "large_bufsize": 135168, 00:14:48.634 "large_pool_count": 1024, 00:14:48.634 "small_bufsize": 8192, 00:14:48.634 "small_pool_count": 8192 00:14:48.634 } 00:14:48.634 } 00:14:48.634 ] 00:14:48.634 }, 00:14:48.634 { 00:14:48.634 "subsystem": "sock", 00:14:48.634 "config": [ 00:14:48.634 { 00:14:48.634 "method": "sock_impl_set_options", 00:14:48.634 "params": { 00:14:48.634 "enable_ktls": false, 00:14:48.634 "enable_placement_id": 0, 00:14:48.634 "enable_quickack": false, 00:14:48.634 "enable_recv_pipe": true, 00:14:48.634 "enable_zerocopy_send_client": false, 00:14:48.634 "enable_zerocopy_send_server": true, 00:14:48.634 "impl_name": "posix", 00:14:48.634 "recv_buf_size": 2097152, 00:14:48.634 "send_buf_size": 2097152, 00:14:48.634 "tls_version": 0, 00:14:48.634 "zerocopy_threshold": 0 00:14:48.634 } 00:14:48.634 }, 00:14:48.634 { 00:14:48.634 "method": "sock_impl_set_options", 00:14:48.634 "params": { 00:14:48.634 "enable_ktls": false, 00:14:48.634 "enable_placement_id": 0, 00:14:48.634 "enable_quickack": false, 00:14:48.634 "enable_recv_pipe": true, 00:14:48.634 "enable_zerocopy_send_client": false, 00:14:48.634 "enable_zerocopy_send_server": true, 00:14:48.634 "impl_name": "ssl", 00:14:48.634 "recv_buf_size": 4096, 00:14:48.634 "send_buf_size": 4096, 00:14:48.634 "tls_version": 0, 00:14:48.634 "zerocopy_threshold": 0 00:14:48.634 } 00:14:48.634 } 00:14:48.634 ] 00:14:48.634 }, 00:14:48.634 { 00:14:48.634 "subsystem": "vmd", 00:14:48.634 "config": [] 00:14:48.634 }, 00:14:48.634 { 
00:14:48.634 "subsystem": "accel", 00:14:48.634 "config": [ 00:14:48.634 { 00:14:48.634 "method": "accel_set_options", 00:14:48.634 "params": { 00:14:48.634 "buf_count": 2048, 00:14:48.634 "large_cache_size": 16, 00:14:48.634 "sequence_count": 2048, 00:14:48.634 "small_cache_size": 128, 00:14:48.634 "task_count": 2048 00:14:48.634 } 00:14:48.634 } 00:14:48.634 ] 00:14:48.634 }, 00:14:48.634 { 00:14:48.634 "subsystem": "bdev", 00:14:48.634 "config": [ 00:14:48.634 { 00:14:48.634 "method": "bdev_set_options", 00:14:48.634 "params": { 00:14:48.634 "bdev_auto_examine": true, 00:14:48.634 "bdev_io_cache_size": 256, 00:14:48.634 "bdev_io_pool_size": 65535, 00:14:48.634 "iobuf_large_cache_size": 16, 00:14:48.634 "iobuf_small_cache_size": 128 00:14:48.634 } 00:14:48.634 }, 00:14:48.634 { 00:14:48.634 "method": "bdev_raid_set_options", 00:14:48.634 "params": { 00:14:48.634 "process_window_size_kb": 1024 00:14:48.634 } 00:14:48.634 }, 00:14:48.634 { 00:14:48.634 "method": "bdev_iscsi_set_options", 00:14:48.634 "params": { 00:14:48.634 "timeout_sec": 30 00:14:48.634 } 00:14:48.634 }, 00:14:48.634 { 00:14:48.634 "method": "bdev_nvme_set_options", 00:14:48.634 "params": { 00:14:48.634 "action_on_timeout": "none", 00:14:48.634 "allow_accel_sequence": false, 00:14:48.634 "arbitration_burst": 0, 00:14:48.634 "bdev_retry_count": 3, 00:14:48.634 "ctrlr_loss_timeout_sec": 0, 00:14:48.634 "delay_cmd_submit": true, 00:14:48.634 "dhchap_dhgroups": [ 00:14:48.634 "null", 00:14:48.634 "ffdhe2048", 00:14:48.634 "ffdhe3072", 00:14:48.634 "ffdhe4096", 00:14:48.634 "ffdhe6144", 00:14:48.634 "ffdhe8192" 00:14:48.634 ], 00:14:48.634 "dhchap_digests": [ 00:14:48.634 "sha256", 00:14:48.634 "sha384", 00:14:48.634 "sha512" 00:14:48.634 ], 00:14:48.634 "disable_auto_failback": false, 00:14:48.634 "fast_io_fail_timeout_sec": 0, 00:14:48.634 "generate_uuids": false, 00:14:48.634 "high_priority_weight": 0, 00:14:48.634 "io_path_stat": false, 00:14:48.634 "io_queue_requests": 0, 00:14:48.634 "keep_alive_timeout_ms": 10000, 00:14:48.634 "low_priority_weight": 0, 00:14:48.634 "medium_priority_weight": 0, 00:14:48.634 "nvme_adminq_poll_period_us": 10000, 00:14:48.634 "nvme_error_stat": false, 00:14:48.634 "nvme_ioq_poll_period_us": 0, 00:14:48.634 "rdma_cm_event_timeout_ms": 0, 00:14:48.634 "rdma_max_cq_size": 0, 00:14:48.634 "rdma_srq_size": 0, 00:14:48.634 "reconnect_delay_sec": 0, 00:14:48.634 "timeout_admin_us": 0, 00:14:48.634 "timeout_us": 0, 00:14:48.634 "transport_ack_timeout": 0, 00:14:48.634 "transport_retry_count": 4, 00:14:48.634 "transport_tos": 0 00:14:48.634 } 00:14:48.634 }, 00:14:48.634 { 00:14:48.634 "method": "bdev_nvme_set_hotplug", 00:14:48.634 "params": { 00:14:48.634 "enable": false, 00:14:48.634 "period_us": 100000 00:14:48.634 } 00:14:48.634 }, 00:14:48.634 { 00:14:48.634 "method": "bdev_malloc_create", 00:14:48.634 "params": { 00:14:48.634 "block_size": 4096, 00:14:48.634 "name": "malloc0", 00:14:48.634 "num_blocks": 8192, 00:14:48.634 "optimal_io_boundary": 0, 00:14:48.634 "physical_block_size": 4096, 00:14:48.634 "uuid": "f37b5eb2-ad0d-4d2c-8dc2-923f3ae91812" 00:14:48.634 } 00:14:48.634 }, 00:14:48.634 { 00:14:48.634 "method": "bdev_wait_for_examine" 00:14:48.634 } 00:14:48.634 ] 00:14:48.634 }, 00:14:48.634 { 00:14:48.634 "subsystem": "nbd", 00:14:48.634 "config": [] 00:14:48.634 }, 00:14:48.634 { 00:14:48.634 "subsystem": "scheduler", 00:14:48.634 "config": [ 00:14:48.634 { 00:14:48.634 "method": "framework_set_scheduler", 00:14:48.634 "params": { 00:14:48.634 "name": "static" 00:14:48.634 } 
00:14:48.634 } 00:14:48.634 ] 00:14:48.634 }, 00:14:48.634 { 00:14:48.634 "subsystem": "nvmf", 00:14:48.634 "config": [ 00:14:48.634 { 00:14:48.634 "method": "nvmf_set_config", 00:14:48.634 "params": { 00:14:48.634 "admin_cmd_passthru": { 00:14:48.634 "identify_ctrlr": false 00:14:48.634 }, 00:14:48.635 "discovery_filter": "match_any" 00:14:48.635 } 00:14:48.635 }, 00:14:48.635 { 00:14:48.635 "method": "nvmf_set_max_subsystems", 00:14:48.635 "params": { 00:14:48.635 "max_subsystems": 1024 00:14:48.635 } 00:14:48.635 }, 00:14:48.635 { 00:14:48.635 "method": "nvmf_set_crdt", 00:14:48.635 "params": { 00:14:48.635 "crdt1": 0, 00:14:48.635 "crdt2": 0, 00:14:48.635 "crdt3": 0 00:14:48.635 } 00:14:48.635 }, 00:14:48.635 { 00:14:48.635 "method": "nvmf_create_transport", 00:14:48.635 "params": { 00:14:48.635 "abort_timeout_sec": 1, 00:14:48.635 "ack_timeout": 0, 00:14:48.635 "buf_cache_size": 4294967295, 00:14:48.635 "c2h_success": false, 00:14:48.635 "dif_insert_or_strip": false, 00:14:48.635 "in_capsule_data_size": 4096, 00:14:48.635 "io_unit_size": 131072, 00:14:48.635 "max_aq_depth": 128, 00:14:48.635 "max_io_qpairs_per_ctrlr": 127, 00:14:48.635 "max_io_size": 131072, 00:14:48.635 "max_queue_depth": 128, 00:14:48.635 "num_shared_buffers": 511, 00:14:48.635 "sock_priority": 0, 00:14:48.635 "trtype": "TCP", 00:14:48.635 "zcopy": false 00:14:48.635 } 00:14:48.635 }, 00:14:48.635 { 00:14:48.635 "method": "nvmf_create_subsystem", 00:14:48.635 "params": { 00:14:48.635 "allow_any_host": false, 00:14:48.635 "ana_reporting": false, 00:14:48.635 "max_cntlid": 65519, 00:14:48.635 "max_namespaces": 32, 00:14:48.635 "min_cntlid": 1, 00:14:48.635 "model_number": "SPDK bdev Controller", 00:14:48.635 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:48.635 "serial_number": "00000000000000000000" 00:14:48.635 } 00:14:48.635 }, 00:14:48.635 { 00:14:48.635 "method": "nvmf_subsystem_add_host", 00:14:48.635 "params": { 00:14:48.635 "host": "nqn.2016-06.io.spdk:host1", 00:14:48.635 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:48.635 "psk": "key0" 00:14:48.635 } 00:14:48.635 }, 00:14:48.635 { 00:14:48.635 "method": "nvmf_subsystem_add_ns", 00:14:48.635 "params": { 00:14:48.635 "namespace": { 00:14:48.635 "bdev_name": "malloc0", 00:14:48.635 "nguid": "F37B5EB2AD0D4D2C8DC2923F3AE91812", 00:14:48.635 "no_auto_visible": false, 00:14:48.635 "nsid": 1, 00:14:48.635 "uuid": "f37b5eb2-ad0d-4d2c-8dc2-923f3ae91812" 00:14:48.635 }, 00:14:48.635 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:14:48.635 } 00:14:48.635 }, 00:14:48.635 { 00:14:48.635 "method": "nvmf_subsystem_add_listener", 00:14:48.635 "params": { 00:14:48.635 "listen_address": { 00:14:48.635 "adrfam": "IPv4", 00:14:48.635 "traddr": "10.0.0.2", 00:14:48.635 "trsvcid": "4420", 00:14:48.635 "trtype": "TCP" 00:14:48.635 }, 00:14:48.635 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:48.635 "secure_channel": true 00:14:48.635 } 00:14:48.635 } 00:14:48.635 ] 00:14:48.635 } 00:14:48.635 ] 00:14:48.635 }' 00:14:48.635 16:26:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:48.635 16:26:22 -- common/autotest_common.sh@10 -- # set +x 00:14:48.635 16:26:22 -- nvmf/common.sh@470 -- # nvmfpid=78722 00:14:48.635 16:26:22 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:14:48.635 16:26:22 -- nvmf/common.sh@471 -- # waitforlisten 78722 00:14:48.635 16:26:22 -- common/autotest_common.sh@817 -- # '[' -z 78722 ']' 00:14:48.635 16:26:22 -- common/autotest_common.sh@821 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:14:48.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:48.635 16:26:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:48.635 16:26:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:48.635 16:26:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:48.635 16:26:22 -- common/autotest_common.sh@10 -- # set +x 00:14:48.893 [2024-04-17 16:26:22.686998] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:14:48.893 [2024-04-17 16:26:22.687080] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:48.893 [2024-04-17 16:26:22.820721] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.152 [2024-04-17 16:26:22.940974] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:49.152 [2024-04-17 16:26:22.941029] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:49.152 [2024-04-17 16:26:22.941042] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:49.152 [2024-04-17 16:26:22.941050] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:49.152 [2024-04-17 16:26:22.941058] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:49.152 [2024-04-17 16:26:22.941149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.152 [2024-04-17 16:26:23.178985] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:49.410 [2024-04-17 16:26:23.210927] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:49.410 [2024-04-17 16:26:23.211145] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:49.977 16:26:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:49.977 16:26:23 -- common/autotest_common.sh@850 -- # return 0 00:14:49.977 16:26:23 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:49.977 16:26:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:49.977 16:26:23 -- common/autotest_common.sh@10 -- # set +x 00:14:49.977 16:26:23 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:49.977 16:26:23 -- target/tls.sh@272 -- # bdevperf_pid=78766 00:14:49.977 16:26:23 -- target/tls.sh@273 -- # waitforlisten 78766 /var/tmp/bdevperf.sock 00:14:49.977 16:26:23 -- common/autotest_common.sh@817 -- # '[' -z 78766 ']' 00:14:49.977 16:26:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:49.977 16:26:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:49.977 16:26:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:49.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
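The final pass exists to prove the captured state replays: no per-object RPC setup is repeated. The tgtcfg JSON saved from the target and the bperfcfg JSON saved from bdevperf are echoed straight back as startup configs on /dev/fd/62 and /dev/fd/63, and the restored keyring, TLS listener, and attached controller must carry the same verify workload. A sketch of the mechanism, assuming bash process substitution is what backs the /dev/fd paths seen in the trace:

# target (pid 78722) boots directly from its own saved configuration
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF \
    -c <(echo "$tgtcfg") &

# bdevperf likewise restores its keyring and controller from bperfcfg
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z \
    -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 \
    -c <(echo "$bperfcfg") &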
00:14:49.977 16:26:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:49.977 16:26:23 -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:14:49.977 16:26:23 -- common/autotest_common.sh@10 -- # set +x 00:14:49.977 16:26:23 -- target/tls.sh@270 -- # echo '{ 00:14:49.977 "subsystems": [ 00:14:49.977 { 00:14:49.977 "subsystem": "keyring", 00:14:49.977 "config": [ 00:14:49.977 { 00:14:49.977 "method": "keyring_file_add_key", 00:14:49.977 "params": { 00:14:49.977 "name": "key0", 00:14:49.977 "path": "/tmp/tmp.7ZxTJDjXUS" 00:14:49.977 } 00:14:49.977 } 00:14:49.977 ] 00:14:49.977 }, 00:14:49.977 { 00:14:49.977 "subsystem": "iobuf", 00:14:49.977 "config": [ 00:14:49.977 { 00:14:49.977 "method": "iobuf_set_options", 00:14:49.977 "params": { 00:14:49.977 "large_bufsize": 135168, 00:14:49.977 "large_pool_count": 1024, 00:14:49.977 "small_bufsize": 8192, 00:14:49.977 "small_pool_count": 8192 00:14:49.977 } 00:14:49.977 } 00:14:49.977 ] 00:14:49.977 }, 00:14:49.977 { 00:14:49.978 "subsystem": "sock", 00:14:49.978 "config": [ 00:14:49.978 { 00:14:49.978 "method": "sock_impl_set_options", 00:14:49.978 "params": { 00:14:49.978 "enable_ktls": false, 00:14:49.978 "enable_placement_id": 0, 00:14:49.978 "enable_quickack": false, 00:14:49.978 "enable_recv_pipe": true, 00:14:49.978 "enable_zerocopy_send_client": false, 00:14:49.978 "enable_zerocopy_send_server": true, 00:14:49.978 "impl_name": "posix", 00:14:49.978 "recv_buf_size": 2097152, 00:14:49.978 "send_buf_size": 2097152, 00:14:49.978 "tls_version": 0, 00:14:49.978 "zerocopy_threshold": 0 00:14:49.978 } 00:14:49.978 }, 00:14:49.978 { 00:14:49.978 "method": "sock_impl_set_options", 00:14:49.978 "params": { 00:14:49.978 "enable_ktls": false, 00:14:49.978 "enable_placement_id": 0, 00:14:49.978 "enable_quickack": false, 00:14:49.978 "enable_recv_pipe": true, 00:14:49.978 "enable_zerocopy_send_client": false, 00:14:49.978 "enable_zerocopy_send_server": true, 00:14:49.978 "impl_name": "ssl", 00:14:49.978 "recv_buf_size": 4096, 00:14:49.978 "send_buf_size": 4096, 00:14:49.978 "tls_version": 0, 00:14:49.978 "zerocopy_threshold": 0 00:14:49.978 } 00:14:49.978 } 00:14:49.978 ] 00:14:49.978 }, 00:14:49.978 { 00:14:49.978 "subsystem": "vmd", 00:14:49.978 "config": [] 00:14:49.978 }, 00:14:49.978 { 00:14:49.978 "subsystem": "accel", 00:14:49.978 "config": [ 00:14:49.978 { 00:14:49.978 "method": "accel_set_options", 00:14:49.978 "params": { 00:14:49.978 "buf_count": 2048, 00:14:49.978 "large_cache_size": 16, 00:14:49.978 "sequence_count": 2048, 00:14:49.978 "small_cache_size": 128, 00:14:49.978 "task_count": 2048 00:14:49.978 } 00:14:49.978 } 00:14:49.978 ] 00:14:49.978 }, 00:14:49.978 { 00:14:49.978 "subsystem": "bdev", 00:14:49.978 "config": [ 00:14:49.978 { 00:14:49.978 "method": "bdev_set_options", 00:14:49.978 "params": { 00:14:49.978 "bdev_auto_examine": true, 00:14:49.978 "bdev_io_cache_size": 256, 00:14:49.978 "bdev_io_pool_size": 65535, 00:14:49.978 "iobuf_large_cache_size": 16, 00:14:49.978 "iobuf_small_cache_size": 128 00:14:49.978 } 00:14:49.978 }, 00:14:49.978 { 00:14:49.978 "method": "bdev_raid_set_options", 00:14:49.978 "params": { 00:14:49.978 "process_window_size_kb": 1024 00:14:49.978 } 00:14:49.978 }, 00:14:49.978 { 00:14:49.978 "method": "bdev_iscsi_set_options", 00:14:49.978 "params": { 00:14:49.978 "timeout_sec": 30 00:14:49.978 } 00:14:49.978 }, 00:14:49.978 { 00:14:49.978 "method": "bdev_nvme_set_options", 00:14:49.978 "params": 
{ 00:14:49.978 "action_on_timeout": "none", 00:14:49.978 "allow_accel_sequence": false, 00:14:49.978 "arbitration_burst": 0, 00:14:49.978 "bdev_retry_count": 3, 00:14:49.978 "ctrlr_loss_timeout_sec": 0, 00:14:49.978 "delay_cmd_submit": true, 00:14:49.978 "dhchap_dhgroups": [ 00:14:49.978 "null", 00:14:49.978 "ffdhe2048", 00:14:49.978 "ffdhe3072", 00:14:49.978 "ffdhe4096", 00:14:49.978 "ffdhe6144", 00:14:49.978 "ffdhe8192" 00:14:49.978 ], 00:14:49.978 "dhchap_digests": [ 00:14:49.978 "sha256", 00:14:49.978 "sha384", 00:14:49.978 "sha512" 00:14:49.978 ], 00:14:49.978 "disable_auto_failback": false, 00:14:49.978 "fast_io_fail_timeout_sec": 0, 00:14:49.978 "generate_uuids": false, 00:14:49.978 "high_priority_weight": 0, 00:14:49.978 "io_path_stat": false, 00:14:49.978 "io_queue_requests": 512, 00:14:49.978 "keep_alive_timeout_ms": 10000, 00:14:49.978 "low_priority_weight": 0, 00:14:49.978 "medium_priority_weight": 0, 00:14:49.978 "nvme_adminq_poll_period_us": 10000, 00:14:49.978 "nvme_error_stat": false, 00:14:49.978 "nvme_ioq_poll_period_us": 0, 00:14:49.978 "rdma_cm_event_timeout_ms": 0, 00:14:49.978 "rdma_max_cq_size": 0, 00:14:49.978 "rdma_srq_size": 0, 00:14:49.978 "reconnect_delay_sec": 0, 00:14:49.978 "timeout_admin_us": 0, 00:14:49.978 "timeout_us": 0, 00:14:49.978 "transport_ack_timeout": 0, 00:14:49.978 "transport_retry_count": 4, 00:14:49.978 "transport_tos": 0 00:14:49.978 } 00:14:49.978 }, 00:14:49.978 { 00:14:49.978 "method": "bdev_nvme_attach_controller", 00:14:49.978 "params": { 00:14:49.978 "adrfam": "IPv4", 00:14:49.978 "ctrlr_loss_timeout_sec": 0, 00:14:49.978 "ddgst": false, 00:14:49.978 "fast_io_fail_timeout_sec": 0, 00:14:49.978 "hdgst": false, 00:14:49.978 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:49.978 "name": "nvme0", 00:14:49.978 "prchk_guard": false, 00:14:49.978 "prchk_reftag": false, 00:14:49.978 "psk": "key0", 00:14:49.978 "reconnect_delay_sec": 0, 00:14:49.978 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:49.978 "traddr": "10.0.0.2", 00:14:49.978 "trsvcid": "4420", 00:14:49.978 "trtype": "TCP" 00:14:49.978 } 00:14:49.978 }, 00:14:49.978 { 00:14:49.978 "method": "bdev_nvme_set_hotplug", 00:14:49.978 "params": { 00:14:49.978 "enable": false, 00:14:49.978 "period_us": 100000 00:14:49.978 } 00:14:49.978 }, 00:14:49.978 { 00:14:49.978 "method": "bdev_enable_histogram", 00:14:49.978 "params": { 00:14:49.978 "enable": true, 00:14:49.978 "name": "nvme0n1" 00:14:49.978 } 00:14:49.978 }, 00:14:49.978 { 00:14:49.978 "method": "bdev_wait_for_examine" 00:14:49.978 } 00:14:49.978 ] 00:14:49.978 }, 00:14:49.978 { 00:14:49.978 "subsystem": "nbd", 00:14:49.978 "config": [] 00:14:49.978 } 00:14:49.978 ] 00:14:49.978 }' 00:14:49.978 [2024-04-17 16:26:23.816902] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 
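The bdevperf-side JSON above mirrors that target config from the initiator's point of view: the same /tmp/tmp.7ZxTJDjXUS key is loaded as "key0" and bdev_nvme_attach_controller dials cnode1 over TLS as soon as the application starts. The attach can equally be issued at runtime against the bdevperf RPC socket, which is what the fips suite further down does; a sketch of that form, reusing the parameters above (here --psk is given the keyring name from the JSON, whereas the fips run passes a key-file path instead):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # One-shot TLS attach against a running bdevperf (-z keeps it idle, waiting on RPC).
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    # Confirm the controller actually came up before driving I/O.
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'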
00:14:49.978 [2024-04-17 16:26:23.817014] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78766 ] 00:14:49.978 [2024-04-17 16:26:23.957884] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.237 [2024-04-17 16:26:24.083218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:50.237 [2024-04-17 16:26:24.253972] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:50.804 16:26:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:50.804 16:26:24 -- common/autotest_common.sh@850 -- # return 0 00:14:50.804 16:26:24 -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:50.804 16:26:24 -- target/tls.sh@275 -- # jq -r '.[].name' 00:14:51.063 16:26:25 -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:51.063 16:26:25 -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:51.322 Running I/O for 1 seconds... 00:14:52.257 00:14:52.257 Latency(us) 00:14:52.257 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:52.257 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:52.257 Verification LBA range: start 0x0 length 0x2000 00:14:52.257 nvme0n1 : 1.03 3856.56 15.06 0.00 0.00 32818.12 7149.38 19660.80 00:14:52.257 =================================================================================================================== 00:14:52.257 Total : 3856.56 15.06 0.00 0.00 32818.12 7149.38 19660.80 00:14:52.257 0 00:14:52.257 16:26:26 -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:14:52.257 16:26:26 -- target/tls.sh@279 -- # cleanup 00:14:52.257 16:26:26 -- target/tls.sh@15 -- # process_shm --id 0 00:14:52.257 16:26:26 -- common/autotest_common.sh@794 -- # type=--id 00:14:52.257 16:26:26 -- common/autotest_common.sh@795 -- # id=0 00:14:52.257 16:26:26 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:14:52.257 16:26:26 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:52.257 16:26:26 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:14:52.257 16:26:26 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:14:52.257 16:26:26 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:14:52.257 16:26:26 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:52.257 nvmf_trace.0 00:14:52.257 16:26:26 -- common/autotest_common.sh@809 -- # return 0 00:14:52.257 16:26:26 -- target/tls.sh@16 -- # killprocess 78766 00:14:52.257 16:26:26 -- common/autotest_common.sh@936 -- # '[' -z 78766 ']' 00:14:52.257 16:26:26 -- common/autotest_common.sh@940 -- # kill -0 78766 00:14:52.257 16:26:26 -- common/autotest_common.sh@941 -- # uname 00:14:52.257 16:26:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:52.257 16:26:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78766 00:14:52.257 16:26:26 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:52.257 killing process with pid 78766 00:14:52.257 16:26:26 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:52.257 16:26:26 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 78766' 00:14:52.257 Received shutdown signal, test time was about 1.000000 seconds 00:14:52.257 00:14:52.257 Latency(us) 00:14:52.257 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:52.257 =================================================================================================================== 00:14:52.257 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:52.257 16:26:26 -- common/autotest_common.sh@955 -- # kill 78766 00:14:52.257 16:26:26 -- common/autotest_common.sh@960 -- # wait 78766 00:14:52.515 16:26:26 -- target/tls.sh@17 -- # nvmftestfini 00:14:52.515 16:26:26 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:52.515 16:26:26 -- nvmf/common.sh@117 -- # sync 00:14:52.774 16:26:26 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:52.774 16:26:26 -- nvmf/common.sh@120 -- # set +e 00:14:52.774 16:26:26 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:52.774 16:26:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:52.774 rmmod nvme_tcp 00:14:52.774 rmmod nvme_fabrics 00:14:52.774 rmmod nvme_keyring 00:14:52.774 16:26:26 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:52.774 16:26:26 -- nvmf/common.sh@124 -- # set -e 00:14:52.774 16:26:26 -- nvmf/common.sh@125 -- # return 0 00:14:52.774 16:26:26 -- nvmf/common.sh@478 -- # '[' -n 78722 ']' 00:14:52.774 16:26:26 -- nvmf/common.sh@479 -- # killprocess 78722 00:14:52.774 16:26:26 -- common/autotest_common.sh@936 -- # '[' -z 78722 ']' 00:14:52.774 16:26:26 -- common/autotest_common.sh@940 -- # kill -0 78722 00:14:52.774 16:26:26 -- common/autotest_common.sh@941 -- # uname 00:14:52.774 16:26:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:52.774 16:26:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78722 00:14:52.774 16:26:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:52.774 killing process with pid 78722 00:14:52.774 16:26:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:52.774 16:26:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78722' 00:14:52.774 16:26:26 -- common/autotest_common.sh@955 -- # kill 78722 00:14:52.774 16:26:26 -- common/autotest_common.sh@960 -- # wait 78722 00:14:53.033 16:26:26 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:53.033 16:26:26 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:53.033 16:26:26 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:53.033 16:26:26 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:53.033 16:26:26 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:53.033 16:26:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.033 16:26:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:53.033 16:26:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:53.033 16:26:26 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:53.033 16:26:26 -- target/tls.sh@18 -- # rm -f /tmp/tmp.VPlsmugWVR /tmp/tmp.Zt1HImE6Hl /tmp/tmp.7ZxTJDjXUS 00:14:53.033 ************************************ 00:14:53.033 END TEST nvmf_tls 00:14:53.033 ************************************ 00:14:53.033 00:14:53.033 real 1m30.014s 00:14:53.033 user 2m24.706s 00:14:53.033 sys 0m28.343s 00:14:53.033 16:26:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:53.033 16:26:26 -- common/autotest_common.sh@10 -- # set +x 00:14:53.033 16:26:27 -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips 
/home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:53.033 16:26:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:53.033 16:26:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:53.033 16:26:27 -- common/autotest_common.sh@10 -- # set +x 00:14:53.292 ************************************ 00:14:53.292 START TEST nvmf_fips 00:14:53.292 ************************************ 00:14:53.292 16:26:27 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:53.292 * Looking for test storage... 00:14:53.292 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:14:53.292 16:26:27 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:53.292 16:26:27 -- nvmf/common.sh@7 -- # uname -s 00:14:53.292 16:26:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:53.292 16:26:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:53.292 16:26:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:53.292 16:26:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:53.292 16:26:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:53.292 16:26:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:53.292 16:26:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:53.292 16:26:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:53.292 16:26:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:53.292 16:26:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:53.292 16:26:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d 00:14:53.292 16:26:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=35bbb10f-fc38-42ac-b909-033700c5e05d 00:14:53.292 16:26:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:53.292 16:26:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:53.292 16:26:27 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:53.292 16:26:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:53.292 16:26:27 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:53.292 16:26:27 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:53.292 16:26:27 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:53.292 16:26:27 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:53.292 16:26:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.292 16:26:27 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.292 16:26:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.292 16:26:27 -- paths/export.sh@5 -- # export PATH 00:14:53.292 16:26:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.292 16:26:27 -- nvmf/common.sh@47 -- # : 0 00:14:53.292 16:26:27 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:53.292 16:26:27 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:53.292 16:26:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:53.292 16:26:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:53.292 16:26:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:53.292 16:26:27 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:53.292 16:26:27 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:53.292 16:26:27 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:53.292 16:26:27 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:53.292 16:26:27 -- fips/fips.sh@89 -- # check_openssl_version 00:14:53.292 16:26:27 -- fips/fips.sh@83 -- # local target=3.0.0 00:14:53.292 16:26:27 -- fips/fips.sh@85 -- # openssl version 00:14:53.292 16:26:27 -- fips/fips.sh@85 -- # awk '{print $2}' 00:14:53.292 16:26:27 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:14:53.292 16:26:27 -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:14:53.292 16:26:27 -- scripts/common.sh@330 -- # local ver1 ver1_l 00:14:53.292 16:26:27 -- scripts/common.sh@331 -- # local ver2 ver2_l 00:14:53.292 16:26:27 -- scripts/common.sh@333 -- # IFS=.-: 00:14:53.292 16:26:27 -- scripts/common.sh@333 -- # read -ra ver1 00:14:53.292 16:26:27 -- scripts/common.sh@334 -- # IFS=.-: 00:14:53.292 16:26:27 -- scripts/common.sh@334 -- # read -ra ver2 00:14:53.292 16:26:27 -- scripts/common.sh@335 -- # local 'op=>=' 00:14:53.292 16:26:27 -- scripts/common.sh@337 -- # ver1_l=3 00:14:53.292 16:26:27 -- scripts/common.sh@338 -- # ver2_l=3 00:14:53.292 16:26:27 -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:14:53.292 16:26:27 -- 
scripts/common.sh@341 -- # case "$op" in 00:14:53.292 16:26:27 -- scripts/common.sh@345 -- # : 1 00:14:53.292 16:26:27 -- scripts/common.sh@361 -- # (( v = 0 )) 00:14:53.292 16:26:27 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:53.292 16:26:27 -- scripts/common.sh@362 -- # decimal 3 00:14:53.292 16:26:27 -- scripts/common.sh@350 -- # local d=3 00:14:53.292 16:26:27 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:53.292 16:26:27 -- scripts/common.sh@352 -- # echo 3 00:14:53.292 16:26:27 -- scripts/common.sh@362 -- # ver1[v]=3 00:14:53.292 16:26:27 -- scripts/common.sh@363 -- # decimal 3 00:14:53.292 16:26:27 -- scripts/common.sh@350 -- # local d=3 00:14:53.292 16:26:27 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:53.292 16:26:27 -- scripts/common.sh@352 -- # echo 3 00:14:53.292 16:26:27 -- scripts/common.sh@363 -- # ver2[v]=3 00:14:53.292 16:26:27 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:14:53.292 16:26:27 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:14:53.292 16:26:27 -- scripts/common.sh@361 -- # (( v++ )) 00:14:53.292 16:26:27 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:53.292 16:26:27 -- scripts/common.sh@362 -- # decimal 0 00:14:53.292 16:26:27 -- scripts/common.sh@350 -- # local d=0 00:14:53.292 16:26:27 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:53.292 16:26:27 -- scripts/common.sh@352 -- # echo 0 00:14:53.292 16:26:27 -- scripts/common.sh@362 -- # ver1[v]=0 00:14:53.292 16:26:27 -- scripts/common.sh@363 -- # decimal 0 00:14:53.292 16:26:27 -- scripts/common.sh@350 -- # local d=0 00:14:53.292 16:26:27 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:53.292 16:26:27 -- scripts/common.sh@352 -- # echo 0 00:14:53.292 16:26:27 -- scripts/common.sh@363 -- # ver2[v]=0 00:14:53.292 16:26:27 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:14:53.292 16:26:27 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:14:53.292 16:26:27 -- scripts/common.sh@361 -- # (( v++ )) 00:14:53.292 16:26:27 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:53.292 16:26:27 -- scripts/common.sh@362 -- # decimal 9 00:14:53.292 16:26:27 -- scripts/common.sh@350 -- # local d=9 00:14:53.292 16:26:27 -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:14:53.292 16:26:27 -- scripts/common.sh@352 -- # echo 9 00:14:53.292 16:26:27 -- scripts/common.sh@362 -- # ver1[v]=9 00:14:53.292 16:26:27 -- scripts/common.sh@363 -- # decimal 0 00:14:53.292 16:26:27 -- scripts/common.sh@350 -- # local d=0 00:14:53.292 16:26:27 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:53.292 16:26:27 -- scripts/common.sh@352 -- # echo 0 00:14:53.292 16:26:27 -- scripts/common.sh@363 -- # ver2[v]=0 00:14:53.292 16:26:27 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:14:53.292 16:26:27 -- scripts/common.sh@364 -- # return 0 00:14:53.292 16:26:27 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:14:53.292 16:26:27 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:14:53.292 16:26:27 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:14:53.292 16:26:27 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:14:53.292 16:26:27 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:14:53.292 16:26:27 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:14:53.292 16:26:27 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:14:53.292 16:26:27 -- fips/fips.sh@113 -- # build_openssl_config 00:14:53.292 16:26:27 -- fips/fips.sh@37 -- # cat 00:14:53.292 16:26:27 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:14:53.292 16:26:27 -- fips/fips.sh@58 -- # cat - 00:14:53.292 16:26:27 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:14:53.292 16:26:27 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:14:53.292 16:26:27 -- fips/fips.sh@116 -- # mapfile -t providers 00:14:53.292 16:26:27 -- fips/fips.sh@116 -- # openssl list -providers 00:14:53.292 16:26:27 -- fips/fips.sh@116 -- # grep name 00:14:53.552 16:26:27 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:14:53.552 16:26:27 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:14:53.552 16:26:27 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:14:53.552 16:26:27 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:14:53.552 16:26:27 -- fips/fips.sh@127 -- # : 00:14:53.552 16:26:27 -- common/autotest_common.sh@638 -- # local es=0 00:14:53.552 16:26:27 -- common/autotest_common.sh@640 -- # valid_exec_arg openssl md5 /dev/fd/62 00:14:53.552 16:26:27 -- common/autotest_common.sh@626 -- # local arg=openssl 00:14:53.552 16:26:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:53.552 16:26:27 -- common/autotest_common.sh@630 -- # type -t openssl 00:14:53.552 16:26:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:53.552 16:26:27 -- common/autotest_common.sh@632 -- # type -P openssl 00:14:53.552 16:26:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:53.552 16:26:27 -- common/autotest_common.sh@632 -- # arg=/usr/bin/openssl 00:14:53.552 16:26:27 -- common/autotest_common.sh@632 -- # [[ -x /usr/bin/openssl ]] 00:14:53.552 16:26:27 -- common/autotest_common.sh@641 -- # openssl md5 /dev/fd/62 00:14:53.552 Error setting digest 00:14:53.552 00E21456DD7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:14:53.552 00E21456DD7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:14:53.552 16:26:27 -- common/autotest_common.sh@641 -- # es=1 00:14:53.552 16:26:27 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:53.552 16:26:27 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:53.552 16:26:27 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:53.552 16:26:27 -- fips/fips.sh@130 -- # nvmftestinit 00:14:53.552 16:26:27 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:53.552 16:26:27 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:53.552 16:26:27 -- nvmf/common.sh@437 -- # prepare_net_devs 
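The long scripts/common.sh walk above (ge / cmp_versions) is nothing more than a field-by-field numeric compare asserting that the system OpenSSL is 3.0.0 or newer before any FIPS checks run, and the failed `openssl md5` that follows is the positive confirmation: with the FIPS provider active, a non-approved digest must error out exactly as captured. A compact standalone rewrite of the comparison logic, assuming purely numeric version fields as the test's inputs are:

    # ge A B: succeed when dotted version A >= B, splitting fields on '.', '-' and ':'.
    ge() {
        local IFS=.-:
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        local i
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 1
        done
        return 0  # all fields equal counts as >=
    }
    ge "$(openssl version | awk '{print $2}')" 3.0.0 && echo "OpenSSL is 3.0.0 or newer"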
00:14:53.552 16:26:27 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:53.552 16:26:27 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:53.552 16:26:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.552 16:26:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:53.552 16:26:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:53.552 16:26:27 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:14:53.552 16:26:27 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:14:53.552 16:26:27 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:14:53.552 16:26:27 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:14:53.552 16:26:27 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:14:53.552 16:26:27 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:14:53.552 16:26:27 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:53.552 16:26:27 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:53.552 16:26:27 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:53.552 16:26:27 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:53.552 16:26:27 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:53.552 16:26:27 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:53.552 16:26:27 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:53.552 16:26:27 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:53.552 16:26:27 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:53.552 16:26:27 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:53.552 16:26:27 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:53.552 16:26:27 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:53.552 16:26:27 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:53.552 16:26:27 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:53.552 Cannot find device "nvmf_tgt_br" 00:14:53.552 16:26:27 -- nvmf/common.sh@155 -- # true 00:14:53.552 16:26:27 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:53.552 Cannot find device "nvmf_tgt_br2" 00:14:53.552 16:26:27 -- nvmf/common.sh@156 -- # true 00:14:53.552 16:26:27 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:53.552 16:26:27 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:53.552 Cannot find device "nvmf_tgt_br" 00:14:53.552 16:26:27 -- nvmf/common.sh@158 -- # true 00:14:53.552 16:26:27 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:53.552 Cannot find device "nvmf_tgt_br2" 00:14:53.552 16:26:27 -- nvmf/common.sh@159 -- # true 00:14:53.552 16:26:27 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:53.552 16:26:27 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:53.552 16:26:27 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:53.552 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:53.552 16:26:27 -- nvmf/common.sh@162 -- # true 00:14:53.552 16:26:27 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:53.552 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:53.552 16:26:27 -- nvmf/common.sh@163 -- # true 00:14:53.553 16:26:27 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:53.553 16:26:27 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:53.553 16:26:27 
-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:53.553 16:26:27 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:53.812 16:26:27 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:53.812 16:26:27 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:53.812 16:26:27 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:53.812 16:26:27 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:53.812 16:26:27 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:53.812 16:26:27 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:53.812 16:26:27 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:53.812 16:26:27 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:53.812 16:26:27 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:53.812 16:26:27 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:53.812 16:26:27 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:53.812 16:26:27 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:53.812 16:26:27 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:53.812 16:26:27 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:53.812 16:26:27 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:53.812 16:26:27 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:53.812 16:26:27 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:53.812 16:26:27 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:53.812 16:26:27 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:53.812 16:26:27 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:53.812 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:53.812 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:14:53.812 00:14:53.812 --- 10.0.0.2 ping statistics --- 00:14:53.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.812 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:14:53.812 16:26:27 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:53.812 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:53.812 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:14:53.812 00:14:53.812 --- 10.0.0.3 ping statistics --- 00:14:53.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.812 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:14:53.812 16:26:27 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:53.812 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:53.812 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:14:53.812 00:14:53.812 --- 10.0.0.1 ping statistics --- 00:14:53.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.812 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:14:53.812 16:26:27 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:53.812 16:26:27 -- nvmf/common.sh@422 -- # return 0 00:14:53.812 16:26:27 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:53.812 16:26:27 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:53.812 16:26:27 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:53.812 16:26:27 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:53.812 16:26:27 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:53.812 16:26:27 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:53.812 16:26:27 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:53.812 16:26:27 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:14:53.812 16:26:27 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:53.812 16:26:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:53.812 16:26:27 -- common/autotest_common.sh@10 -- # set +x 00:14:53.812 16:26:27 -- nvmf/common.sh@470 -- # nvmfpid=79056 00:14:53.812 16:26:27 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:53.812 16:26:27 -- nvmf/common.sh@471 -- # waitforlisten 79056 00:14:53.812 16:26:27 -- common/autotest_common.sh@817 -- # '[' -z 79056 ']' 00:14:53.812 16:26:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:53.812 16:26:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:53.812 16:26:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:53.812 16:26:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:53.812 16:26:27 -- common/autotest_common.sh@10 -- # set +x 00:14:54.070 [2024-04-17 16:26:27.875097] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:14:54.070 [2024-04-17 16:26:27.875194] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:54.070 [2024-04-17 16:26:28.007172] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.342 [2024-04-17 16:26:28.130667] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:54.342 [2024-04-17 16:26:28.130745] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:54.342 [2024-04-17 16:26:28.130757] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:54.342 [2024-04-17 16:26:28.130766] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:54.342 [2024-04-17 16:26:28.130799] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
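Everything from `ip netns add nvmf_tgt_ns_spdk` through the three pings above is nvmf_veth_init building the test topology: an initiator veth endpoint left on the host, target endpoints moved into the nvmf_tgt_ns_spdk namespace, all host-side legs enslaved to the nvmf_br bridge, and 4420/tcp opened through iptables. Condensed from the commands in the log (second target interface and its 10.0.0.3 address omitted for brevity):

    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: *_if is the traffic endpoint, *_br is the leg that joins the bridge.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # Bridge the host-side legs so 10.0.0.1 (initiator) can reach 10.0.0.2 (target).
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2  # sanity check before the target is started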
00:14:54.342 [2024-04-17 16:26:28.130839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:54.909 16:26:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:54.909 16:26:28 -- common/autotest_common.sh@850 -- # return 0 00:14:54.909 16:26:28 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:54.909 16:26:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:54.909 16:26:28 -- common/autotest_common.sh@10 -- # set +x 00:14:55.166 16:26:28 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:55.166 16:26:28 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:14:55.166 16:26:28 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:55.166 16:26:28 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:55.166 16:26:28 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:55.166 16:26:28 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:55.166 16:26:28 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:55.166 16:26:28 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:55.166 16:26:28 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:55.424 [2024-04-17 16:26:29.228838] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:55.424 [2024-04-17 16:26:29.244805] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:55.424 [2024-04-17 16:26:29.245015] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:55.425 [2024-04-17 16:26:29.276508] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:55.425 malloc0 00:14:55.425 16:26:29 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:55.425 16:26:29 -- fips/fips.sh@147 -- # bdevperf_pid=79109 00:14:55.425 16:26:29 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:55.425 16:26:29 -- fips/fips.sh@148 -- # waitforlisten 79109 /var/tmp/bdevperf.sock 00:14:55.425 16:26:29 -- common/autotest_common.sh@817 -- # '[' -z 79109 ']' 00:14:55.425 16:26:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:55.425 16:26:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:55.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:55.425 16:26:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:55.425 16:26:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:55.425 16:26:29 -- common/autotest_common.sh@10 -- # set +x 00:14:55.425 [2024-04-17 16:26:29.389107] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 
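The `key=NVMeTLSkey-1:01:...` / `chmod 0600` sequence above writes the TLS PSK in the NVMe PSK interchange format before setup_nvmf_tgt_conf wires it into the subsystem: the "NVMeTLSkey-1" prefix identifies the format, the next field is the hash indicator (01 is commonly read as SHA-256; that gloss is mine, the log does not say so), then the base64 payload. As a standalone snippet, with the key value copied verbatim from the log:

    key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
    # PSK interchange format: NVMeTLSkey-1:<hash>:<base64 key material>:
    echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
    chmod 0600 "$key_path"  # keep the PSK private, as fips.sh does
    # The initiator later consumes it via: bdev_nvme_attach_controller ... --psk "$key_path"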
00:14:55.425 [2024-04-17 16:26:29.389228] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79109 ] 00:14:55.682 [2024-04-17 16:26:29.529210] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.682 [2024-04-17 16:26:29.662553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:56.616 16:26:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:56.616 16:26:30 -- common/autotest_common.sh@850 -- # return 0 00:14:56.616 16:26:30 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:56.616 [2024-04-17 16:26:30.653851] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:56.616 [2024-04-17 16:26:30.653964] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:56.874 TLSTESTn1 00:14:56.874 16:26:30 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:56.874 Running I/O for 10 seconds... 00:15:06.854 00:15:06.854 Latency(us) 00:15:06.854 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:06.854 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:06.854 Verification LBA range: start 0x0 length 0x2000 00:15:06.854 TLSTESTn1 : 10.03 3776.81 14.75 0.00 0.00 33815.87 8638.84 33602.09 00:15:06.854 =================================================================================================================== 00:15:06.854 Total : 3776.81 14.75 0.00 0.00 33815.87 8638.84 33602.09 00:15:06.854 0 00:15:06.854 16:26:40 -- fips/fips.sh@1 -- # cleanup 00:15:06.854 16:26:40 -- fips/fips.sh@15 -- # process_shm --id 0 00:15:06.854 16:26:40 -- common/autotest_common.sh@794 -- # type=--id 00:15:06.854 16:26:40 -- common/autotest_common.sh@795 -- # id=0 00:15:06.854 16:26:40 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:15:06.854 16:26:40 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:07.114 16:26:40 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:15:07.114 16:26:40 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:15:07.114 16:26:40 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:15:07.114 16:26:40 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:07.114 nvmf_trace.0 00:15:07.114 16:26:40 -- common/autotest_common.sh@809 -- # return 0 00:15:07.114 16:26:40 -- fips/fips.sh@16 -- # killprocess 79109 00:15:07.114 16:26:40 -- common/autotest_common.sh@936 -- # '[' -z 79109 ']' 00:15:07.114 16:26:40 -- common/autotest_common.sh@940 -- # kill -0 79109 00:15:07.114 16:26:40 -- common/autotest_common.sh@941 -- # uname 00:15:07.114 16:26:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:07.114 16:26:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79109 00:15:07.114 16:26:41 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:15:07.114 
16:26:41 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:15:07.114 killing process with pid 79109 00:15:07.114 16:26:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79109' 00:15:07.114 16:26:41 -- common/autotest_common.sh@955 -- # kill 79109 00:15:07.114 Received shutdown signal, test time was about 10.000000 seconds 00:15:07.114 00:15:07.114 Latency(us) 00:15:07.114 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:07.114 =================================================================================================================== 00:15:07.114 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:07.114 [2024-04-17 16:26:41.007264] app.c: 930:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:07.114 16:26:41 -- common/autotest_common.sh@960 -- # wait 79109 00:15:07.373 16:26:41 -- fips/fips.sh@17 -- # nvmftestfini 00:15:07.373 16:26:41 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:07.373 16:26:41 -- nvmf/common.sh@117 -- # sync 00:15:07.373 16:26:41 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:07.373 16:26:41 -- nvmf/common.sh@120 -- # set +e 00:15:07.373 16:26:41 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:07.373 16:26:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:07.373 rmmod nvme_tcp 00:15:07.373 rmmod nvme_fabrics 00:15:07.373 rmmod nvme_keyring 00:15:07.373 16:26:41 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:07.373 16:26:41 -- nvmf/common.sh@124 -- # set -e 00:15:07.373 16:26:41 -- nvmf/common.sh@125 -- # return 0 00:15:07.373 16:26:41 -- nvmf/common.sh@478 -- # '[' -n 79056 ']' 00:15:07.373 16:26:41 -- nvmf/common.sh@479 -- # killprocess 79056 00:15:07.373 16:26:41 -- common/autotest_common.sh@936 -- # '[' -z 79056 ']' 00:15:07.373 16:26:41 -- common/autotest_common.sh@940 -- # kill -0 79056 00:15:07.373 16:26:41 -- common/autotest_common.sh@941 -- # uname 00:15:07.373 16:26:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:07.373 16:26:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79056 00:15:07.373 16:26:41 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:07.373 killing process with pid 79056 00:15:07.373 16:26:41 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:07.373 16:26:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79056' 00:15:07.373 16:26:41 -- common/autotest_common.sh@955 -- # kill 79056 00:15:07.373 [2024-04-17 16:26:41.383364] app.c: 930:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:07.373 16:26:41 -- common/autotest_common.sh@960 -- # wait 79056 00:15:07.632 16:26:41 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:07.632 16:26:41 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:07.632 16:26:41 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:07.632 16:26:41 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:07.632 16:26:41 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:07.632 16:26:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:07.632 16:26:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:07.632 16:26:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:07.891 16:26:41 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:07.891 16:26:41 -- fips/fips.sh@18 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:07.891 00:15:07.891 real 0m14.576s 00:15:07.891 user 0m19.994s 00:15:07.891 sys 0m5.753s 00:15:07.891 16:26:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:07.891 16:26:41 -- common/autotest_common.sh@10 -- # set +x 00:15:07.891 ************************************ 00:15:07.891 END TEST nvmf_fips 00:15:07.891 ************************************ 00:15:07.891 16:26:41 -- nvmf/nvmf.sh@64 -- # '[' 0 -eq 1 ']' 00:15:07.891 16:26:41 -- nvmf/nvmf.sh@70 -- # [[ virt == phy ]] 00:15:07.891 16:26:41 -- nvmf/nvmf.sh@84 -- # timing_exit target 00:15:07.891 16:26:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:07.891 16:26:41 -- common/autotest_common.sh@10 -- # set +x 00:15:07.891 16:26:41 -- nvmf/nvmf.sh@86 -- # timing_enter host 00:15:07.891 16:26:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:07.891 16:26:41 -- common/autotest_common.sh@10 -- # set +x 00:15:07.891 16:26:41 -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]] 00:15:07.891 16:26:41 -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:15:07.891 16:26:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:07.891 16:26:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:07.891 16:26:41 -- common/autotest_common.sh@10 -- # set +x 00:15:07.891 ************************************ 00:15:07.891 START TEST nvmf_multicontroller 00:15:07.891 ************************************ 00:15:07.891 16:26:41 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:15:08.151 * Looking for test storage... 00:15:08.151 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:08.151 16:26:41 -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:08.151 16:26:41 -- nvmf/common.sh@7 -- # uname -s 00:15:08.151 16:26:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:08.151 16:26:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:08.151 16:26:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:08.151 16:26:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:08.151 16:26:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:08.151 16:26:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:08.151 16:26:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:08.151 16:26:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:08.151 16:26:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:08.151 16:26:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:08.151 16:26:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d 00:15:08.151 16:26:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=35bbb10f-fc38-42ac-b909-033700c5e05d 00:15:08.151 16:26:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:08.151 16:26:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:08.151 16:26:41 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:08.151 16:26:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:08.151 16:26:41 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:08.151 16:26:41 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:08.151 16:26:41 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:08.151 16:26:41 -- 
scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:08.151 16:26:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.151 16:26:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.151 16:26:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.151 16:26:41 -- paths/export.sh@5 -- # export PATH 00:15:08.151 16:26:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.151 16:26:41 -- nvmf/common.sh@47 -- # : 0 00:15:08.151 16:26:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:08.151 16:26:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:08.151 16:26:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:08.151 16:26:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:08.151 16:26:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:08.151 16:26:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:08.151 16:26:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:08.151 16:26:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:08.151 16:26:41 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:08.151 16:26:41 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:08.151 16:26:41 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:15:08.151 16:26:41 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:15:08.151 16:26:41 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:08.151 16:26:41 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 
00:15:08.151 16:26:41 -- host/multicontroller.sh@23 -- # nvmftestinit 00:15:08.151 16:26:41 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:08.151 16:26:41 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:08.151 16:26:41 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:08.151 16:26:41 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:08.151 16:26:41 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:08.151 16:26:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:08.151 16:26:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:08.151 16:26:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:08.151 16:26:41 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:15:08.151 16:26:41 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:15:08.151 16:26:41 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:15:08.151 16:26:41 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:15:08.151 16:26:41 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:15:08.151 16:26:41 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:15:08.151 16:26:41 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:08.151 16:26:41 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:08.151 16:26:41 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:08.151 16:26:41 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:08.151 16:26:41 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:08.151 16:26:41 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:08.151 16:26:41 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:08.151 16:26:41 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:08.151 16:26:41 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:08.151 16:26:41 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:08.151 16:26:41 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:08.151 16:26:41 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:08.151 16:26:41 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:08.151 16:26:42 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:08.151 Cannot find device "nvmf_tgt_br" 00:15:08.151 16:26:42 -- nvmf/common.sh@155 -- # true 00:15:08.151 16:26:42 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:08.151 Cannot find device "nvmf_tgt_br2" 00:15:08.151 16:26:42 -- nvmf/common.sh@156 -- # true 00:15:08.151 16:26:42 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:08.151 16:26:42 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:08.151 Cannot find device "nvmf_tgt_br" 00:15:08.151 16:26:42 -- nvmf/common.sh@158 -- # true 00:15:08.151 16:26:42 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:08.151 Cannot find device "nvmf_tgt_br2" 00:15:08.151 16:26:42 -- nvmf/common.sh@159 -- # true 00:15:08.151 16:26:42 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:08.151 16:26:42 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:08.151 16:26:42 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:08.151 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:08.151 16:26:42 -- nvmf/common.sh@162 -- # true 00:15:08.151 16:26:42 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:08.151 Cannot open network namespace "nvmf_tgt_ns_spdk": 
No such file or directory 00:15:08.151 16:26:42 -- nvmf/common.sh@163 -- # true 00:15:08.151 16:26:42 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:08.151 16:26:42 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:08.151 16:26:42 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:08.151 16:26:42 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:08.151 16:26:42 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:08.151 16:26:42 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:08.410 16:26:42 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:08.410 16:26:42 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:08.410 16:26:42 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:08.410 16:26:42 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:08.410 16:26:42 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:08.410 16:26:42 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:08.410 16:26:42 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:08.410 16:26:42 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:08.410 16:26:42 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:08.410 16:26:42 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:08.410 16:26:42 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:08.410 16:26:42 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:08.410 16:26:42 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:08.410 16:26:42 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:08.410 16:26:42 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:08.410 16:26:42 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:08.410 16:26:42 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:08.410 16:26:42 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:08.410 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:08.410 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:15:08.410 00:15:08.410 --- 10.0.0.2 ping statistics --- 00:15:08.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:08.410 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:15:08.410 16:26:42 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:08.410 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:08.410 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:15:08.410 00:15:08.410 --- 10.0.0.3 ping statistics --- 00:15:08.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:08.410 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:15:08.410 16:26:42 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:08.410 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:08.410 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:15:08.410 00:15:08.410 --- 10.0.0.1 ping statistics --- 00:15:08.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:08.410 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:15:08.410 16:26:42 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:08.410 16:26:42 -- nvmf/common.sh@422 -- # return 0 00:15:08.410 16:26:42 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:08.410 16:26:42 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:08.410 16:26:42 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:08.410 16:26:42 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:08.410 16:26:42 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:08.410 16:26:42 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:08.410 16:26:42 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:08.410 16:26:42 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:15:08.410 16:26:42 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:08.410 16:26:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:08.410 16:26:42 -- common/autotest_common.sh@10 -- # set +x 00:15:08.410 16:26:42 -- nvmf/common.sh@470 -- # nvmfpid=79478 00:15:08.410 16:26:42 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:08.410 16:26:42 -- nvmf/common.sh@471 -- # waitforlisten 79478 00:15:08.410 16:26:42 -- common/autotest_common.sh@817 -- # '[' -z 79478 ']' 00:15:08.410 16:26:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.410 16:26:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:08.410 16:26:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:08.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:08.410 16:26:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:08.410 16:26:42 -- common/autotest_common.sh@10 -- # set +x 00:15:08.410 [2024-04-17 16:26:42.439965] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:15:08.410 [2024-04-17 16:26:42.440057] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:08.669 [2024-04-17 16:26:42.581973] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:08.669 [2024-04-17 16:26:42.707180] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:08.669 [2024-04-17 16:26:42.707419] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:08.669 [2024-04-17 16:26:42.707603] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:08.669 [2024-04-17 16:26:42.707793] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:08.669 [2024-04-17 16:26:42.707954] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:08.669 [2024-04-17 16:26:42.708123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:08.669 [2024-04-17 16:26:42.708311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:08.669 [2024-04-17 16:26:42.708349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:09.604 16:26:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:09.604 16:26:43 -- common/autotest_common.sh@850 -- # return 0 00:15:09.604 16:26:43 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:09.604 16:26:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:09.604 16:26:43 -- common/autotest_common.sh@10 -- # set +x 00:15:09.604 16:26:43 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:09.604 16:26:43 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:09.604 16:26:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:09.604 16:26:43 -- common/autotest_common.sh@10 -- # set +x 00:15:09.604 [2024-04-17 16:26:43.493721] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:09.604 16:26:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:09.604 16:26:43 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:09.604 16:26:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:09.604 16:26:43 -- common/autotest_common.sh@10 -- # set +x 00:15:09.604 Malloc0 00:15:09.604 16:26:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:09.604 16:26:43 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:09.604 16:26:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:09.604 16:26:43 -- common/autotest_common.sh@10 -- # set +x 00:15:09.604 16:26:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:09.604 16:26:43 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:09.604 16:26:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:09.604 16:26:43 -- common/autotest_common.sh@10 -- # set +x 00:15:09.604 16:26:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:09.604 16:26:43 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:09.604 16:26:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:09.604 16:26:43 -- common/autotest_common.sh@10 -- # set +x 00:15:09.604 [2024-04-17 16:26:43.556727] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:09.604 16:26:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:09.604 16:26:43 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:09.604 16:26:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:09.604 16:26:43 -- common/autotest_common.sh@10 -- # set +x 00:15:09.604 [2024-04-17 16:26:43.564659] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:09.604 16:26:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:09.604 16:26:43 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:09.604 16:26:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:09.604 16:26:43 -- common/autotest_common.sh@10 -- # set +x 00:15:09.604 Malloc1 00:15:09.604 16:26:43 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:09.604 16:26:43 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:15:09.604 16:26:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:09.604 16:26:43 -- common/autotest_common.sh@10 -- # set +x 00:15:09.604 16:26:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:09.604 16:26:43 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:15:09.604 16:26:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:09.604 16:26:43 -- common/autotest_common.sh@10 -- # set +x 00:15:09.604 16:26:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:09.604 16:26:43 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:09.604 16:26:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:09.604 16:26:43 -- common/autotest_common.sh@10 -- # set +x 00:15:09.604 16:26:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:09.604 16:26:43 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:15:09.604 16:26:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:09.604 16:26:43 -- common/autotest_common.sh@10 -- # set +x 00:15:09.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:09.604 16:26:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:09.604 16:26:43 -- host/multicontroller.sh@44 -- # bdevperf_pid=79530 00:15:09.604 16:26:43 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:09.604 16:26:43 -- host/multicontroller.sh@47 -- # waitforlisten 79530 /var/tmp/bdevperf.sock 00:15:09.604 16:26:43 -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:15:09.604 16:26:43 -- common/autotest_common.sh@817 -- # '[' -z 79530 ']' 00:15:09.604 16:26:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:09.604 16:26:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:09.604 16:26:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
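bdevperf has just been started in RPC-wait mode on its own socket. The invocation, annotated; the flag readings follow bdevperf's usage text for this SPDK vintage, and the interpretation of -f in particular is an assumption, not stated in the log:

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
bdevperf_pid=$!
#   -z          start with no bdev configuration and wait for RPCs
#   -r <sock>   private JSON-RPC socket, so it cannot collide with nvmf_tgt's
#   -q 128      queue depth
#   -o 4096     I/O size in bytes
#   -w write    workload type
#   -t 1        run time in seconds
#   -f          keep running on I/O failure (assumed: tolerates path changes mid-run)
waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock   # block until the socket answers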
00:15:09.604 16:26:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:09.604 16:26:43 -- common/autotest_common.sh@10 -- # set +x 00:15:10.979 16:26:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:10.979 16:26:44 -- common/autotest_common.sh@850 -- # return 0 00:15:10.979 16:26:44 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:15:10.979 16:26:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:10.979 16:26:44 -- common/autotest_common.sh@10 -- # set +x 00:15:10.979 NVMe0n1 00:15:10.979 16:26:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:10.979 16:26:44 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:10.979 16:26:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:10.979 16:26:44 -- common/autotest_common.sh@10 -- # set +x 00:15:10.979 16:26:44 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:15:10.979 16:26:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:10.979 1 00:15:10.979 16:26:44 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:15:10.979 16:26:44 -- common/autotest_common.sh@638 -- # local es=0 00:15:10.979 16:26:44 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:15:10.979 16:26:44 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:15:10.979 16:26:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:10.979 16:26:44 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:15:10.979 16:26:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:10.979 16:26:44 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:15:10.979 16:26:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:10.979 16:26:44 -- common/autotest_common.sh@10 -- # set +x 00:15:10.979 2024/04/17 16:26:44 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:15:10.979 request: 00:15:10.979 { 00:15:10.979 "method": "bdev_nvme_attach_controller", 00:15:10.979 "params": { 00:15:10.979 "name": "NVMe0", 00:15:10.979 "trtype": "tcp", 00:15:10.979 "traddr": "10.0.0.2", 00:15:10.979 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:15:10.979 "hostaddr": "10.0.0.2", 00:15:10.979 "hostsvcid": "60000", 00:15:10.979 "adrfam": "ipv4", 00:15:10.979 "trsvcid": "4420", 00:15:10.979 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:15:10.979 } 00:15:10.979 } 00:15:10.979 Got JSON-RPC error response 00:15:10.979 GoRPCClient: error on JSON-RPC call 00:15:10.979 16:26:44 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:15:10.979 16:26:44 -- 
common/autotest_common.sh@641 -- # es=1 00:15:10.979 16:26:44 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:10.980 16:26:44 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:10.980 16:26:44 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:10.980 16:26:44 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:15:10.980 16:26:44 -- common/autotest_common.sh@638 -- # local es=0 00:15:10.980 16:26:44 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:15:10.980 16:26:44 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:15:10.980 16:26:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:10.980 16:26:44 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:15:10.980 16:26:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:10.980 16:26:44 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:15:10.980 16:26:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:10.980 16:26:44 -- common/autotest_common.sh@10 -- # set +x 00:15:10.980 2024/04/17 16:26:44 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:15:10.980 request: 00:15:10.980 { 00:15:10.980 "method": "bdev_nvme_attach_controller", 00:15:10.980 "params": { 00:15:10.980 "name": "NVMe0", 00:15:10.980 "trtype": "tcp", 00:15:10.980 "traddr": "10.0.0.2", 00:15:10.980 "hostaddr": "10.0.0.2", 00:15:10.980 "hostsvcid": "60000", 00:15:10.980 "adrfam": "ipv4", 00:15:10.980 "trsvcid": "4420", 00:15:10.980 "subnqn": "nqn.2016-06.io.spdk:cnode2" 00:15:10.980 } 00:15:10.980 } 00:15:10.980 Got JSON-RPC error response 00:15:10.980 GoRPCClient: error on JSON-RPC call 00:15:10.980 16:26:44 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:15:10.980 16:26:44 -- common/autotest_common.sh@641 -- # es=1 00:15:10.980 16:26:44 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:10.980 16:26:44 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:10.980 16:26:44 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:10.980 16:26:44 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:15:10.980 16:26:44 -- common/autotest_common.sh@638 -- # local es=0 00:15:10.980 16:26:44 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:15:10.980 16:26:44 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:15:10.980 16:26:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:10.980 16:26:44 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:15:10.980 16:26:44 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:10.980 16:26:44 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:15:10.980 16:26:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:10.980 16:26:44 -- common/autotest_common.sh@10 -- # set +x 00:15:10.980 2024/04/17 16:26:44 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:15:10.980 request: 00:15:10.980 { 00:15:10.980 "method": "bdev_nvme_attach_controller", 00:15:10.980 "params": { 00:15:10.980 "name": "NVMe0", 00:15:10.980 "trtype": "tcp", 00:15:10.980 "traddr": "10.0.0.2", 00:15:10.980 "hostaddr": "10.0.0.2", 00:15:10.980 "hostsvcid": "60000", 00:15:10.980 "adrfam": "ipv4", 00:15:10.980 "trsvcid": "4420", 00:15:10.980 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:10.980 "multipath": "disable" 00:15:10.980 } 00:15:10.980 } 00:15:10.980 Got JSON-RPC error response 00:15:10.980 GoRPCClient: error on JSON-RPC call 00:15:10.980 16:26:44 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:15:10.980 16:26:44 -- common/autotest_common.sh@641 -- # es=1 00:15:10.980 16:26:44 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:10.980 16:26:44 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:10.980 16:26:44 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:10.980 16:26:44 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:15:10.980 16:26:44 -- common/autotest_common.sh@638 -- # local es=0 00:15:10.980 16:26:44 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:15:10.980 16:26:44 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:15:10.980 16:26:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:10.980 16:26:44 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:15:10.980 16:26:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:10.980 16:26:44 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:15:10.980 16:26:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:10.980 16:26:44 -- common/autotest_common.sh@10 -- # set +x 00:15:10.980 2024/04/17 16:26:44 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:15:10.980 request: 00:15:10.980 { 00:15:10.980 "method": "bdev_nvme_attach_controller", 00:15:10.980 "params": { 00:15:10.980 "name": "NVMe0", 
00:15:10.980 "trtype": "tcp", 00:15:10.980 "traddr": "10.0.0.2", 00:15:10.980 "hostaddr": "10.0.0.2", 00:15:10.980 "hostsvcid": "60000", 00:15:10.980 "adrfam": "ipv4", 00:15:10.980 "trsvcid": "4420", 00:15:10.980 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:10.980 "multipath": "failover" 00:15:10.980 } 00:15:10.980 } 00:15:10.980 Got JSON-RPC error response 00:15:10.980 GoRPCClient: error on JSON-RPC call 00:15:10.980 16:26:44 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:15:10.980 16:26:44 -- common/autotest_common.sh@641 -- # es=1 00:15:10.980 16:26:44 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:10.980 16:26:44 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:10.980 16:26:44 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:10.980 16:26:44 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:10.980 16:26:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:10.980 16:26:44 -- common/autotest_common.sh@10 -- # set +x 00:15:10.980 00:15:10.980 16:26:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:10.980 16:26:44 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:10.980 16:26:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:10.980 16:26:44 -- common/autotest_common.sh@10 -- # set +x 00:15:10.980 16:26:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:10.980 16:26:44 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:15:10.980 16:26:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:10.980 16:26:44 -- common/autotest_common.sh@10 -- # set +x 00:15:10.980 00:15:10.980 16:26:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:10.980 16:26:45 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:15:10.980 16:26:45 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:10.980 16:26:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:10.980 16:26:45 -- common/autotest_common.sh@10 -- # set +x 00:15:11.240 16:26:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:11.240 16:26:45 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:15:11.240 16:26:45 -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:12.175 0 00:15:12.175 16:26:46 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:15:12.175 16:26:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:12.175 16:26:46 -- common/autotest_common.sh@10 -- # set +x 00:15:12.175 16:26:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:12.175 16:26:46 -- host/multicontroller.sh@100 -- # killprocess 79530 00:15:12.175 16:26:46 -- common/autotest_common.sh@936 -- # '[' -z 79530 ']' 00:15:12.175 16:26:46 -- common/autotest_common.sh@940 -- # kill -0 79530 00:15:12.175 16:26:46 -- common/autotest_common.sh@941 -- # uname 00:15:12.175 16:26:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:12.175 16:26:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79530 00:15:12.434 killing process with pid 79530 00:15:12.434 
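The four rejected attaches and the one accepted attach above pin down bdev_nvme's controller-naming rules. Condensed (rpc_cmd in the harness wraps scripts/rpc.py; this summary is a sketch of what was just exercised):

# name NVMe0 is already bound to cnode1 at 10.0.0.2:4420 with hostsvcid 60000
#  + different hostnqn, same path        -> -114: network path must match exactly
#  + different subsystem (cnode2)        -> -114: one subsystem per controller name
#  + -x disable, same path               -> -114: "already exists and multipath is disabled"
#  + -x failover, same path (4420)       -> -114: failover requires a *new* network path
# a genuinely new path under the same name is accepted:
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# after detaching that path and attaching NVMe1 to 4421, bdev_nvme_get_controllers
# reports exactly 2 controllers, which the '[' 2 '!=' 2 ']' check above verifies.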
16:26:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:12.434 16:26:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:12.434 16:26:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79530' 00:15:12.434 16:26:46 -- common/autotest_common.sh@955 -- # kill 79530 00:15:12.434 16:26:46 -- common/autotest_common.sh@960 -- # wait 79530 00:15:12.692 16:26:46 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:12.692 16:26:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:12.692 16:26:46 -- common/autotest_common.sh@10 -- # set +x 00:15:12.692 16:26:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:12.692 16:26:46 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:15:12.692 16:26:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:12.692 16:26:46 -- common/autotest_common.sh@10 -- # set +x 00:15:12.692 16:26:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:12.692 16:26:46 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:15:12.692 16:26:46 -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:12.692 16:26:46 -- common/autotest_common.sh@1598 -- # read -r file 00:15:12.692 16:26:46 -- common/autotest_common.sh@1597 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:15:12.692 16:26:46 -- common/autotest_common.sh@1597 -- # sort -u 00:15:12.692 16:26:46 -- common/autotest_common.sh@1599 -- # cat 00:15:12.692 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:15:12.692 [2024-04-17 16:26:43.677464] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:15:12.692 [2024-04-17 16:26:43.677576] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79530 ] 00:15:12.692 [2024-04-17 16:26:43.819068] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.692 [2024-04-17 16:26:43.947272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.692 [2024-04-17 16:26:45.012965] bdev.c:4548:bdev_name_add: *ERROR*: Bdev name a53b0a06-f13b-4c6e-b588-d3a95b5dbc55 already exists 00:15:12.693 [2024-04-17 16:26:45.013045] bdev.c:7651:bdev_register: *ERROR*: Unable to add uuid:a53b0a06-f13b-4c6e-b588-d3a95b5dbc55 alias for bdev NVMe1n1 00:15:12.693 [2024-04-17 16:26:45.013068] bdev_nvme.c:4264:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:15:12.693 Running I/O for 1 seconds... 
00:15:12.693 00:15:12.693 Latency(us) 00:15:12.693 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:12.693 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:15:12.693 NVMe0n1 : 1.01 18746.17 73.23 0.00 0.00 6815.83 3991.74 13762.56 00:15:12.693 =================================================================================================================== 00:15:12.693 Total : 18746.17 73.23 0.00 0.00 6815.83 3991.74 13762.56 00:15:12.693 Received shutdown signal, test time was about 1.000000 seconds 00:15:12.693 00:15:12.693 Latency(us) 00:15:12.693 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:12.693 =================================================================================================================== 00:15:12.693 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:12.693 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:15:12.693 16:26:46 -- common/autotest_common.sh@1604 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:12.693 16:26:46 -- common/autotest_common.sh@1598 -- # read -r file 00:15:12.693 16:26:46 -- host/multicontroller.sh@108 -- # nvmftestfini 00:15:12.693 16:26:46 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:12.693 16:26:46 -- nvmf/common.sh@117 -- # sync 00:15:12.693 16:26:46 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:12.693 16:26:46 -- nvmf/common.sh@120 -- # set +e 00:15:12.693 16:26:46 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:12.693 16:26:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:12.693 rmmod nvme_tcp 00:15:12.693 rmmod nvme_fabrics 00:15:12.693 rmmod nvme_keyring 00:15:12.693 16:26:46 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:12.693 16:26:46 -- nvmf/common.sh@124 -- # set -e 00:15:12.693 16:26:46 -- nvmf/common.sh@125 -- # return 0 00:15:12.693 16:26:46 -- nvmf/common.sh@478 -- # '[' -n 79478 ']' 00:15:12.693 16:26:46 -- nvmf/common.sh@479 -- # killprocess 79478 00:15:12.693 16:26:46 -- common/autotest_common.sh@936 -- # '[' -z 79478 ']' 00:15:12.693 16:26:46 -- common/autotest_common.sh@940 -- # kill -0 79478 00:15:12.693 16:26:46 -- common/autotest_common.sh@941 -- # uname 00:15:12.693 16:26:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:12.693 16:26:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79478 00:15:12.693 killing process with pid 79478 00:15:12.693 16:26:46 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:12.693 16:26:46 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:12.693 16:26:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79478' 00:15:12.693 16:26:46 -- common/autotest_common.sh@955 -- # kill 79478 00:15:12.693 16:26:46 -- common/autotest_common.sh@960 -- # wait 79478 00:15:12.951 16:26:46 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:12.951 16:26:46 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:12.951 16:26:46 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:12.951 16:26:46 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:12.951 16:26:46 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:12.951 16:26:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:12.951 16:26:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:12.951 16:26:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:13.209 16:26:46 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:13.209 
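The teardown above follows a fixed order; a sketch of the effect of nvmftestfini/nvmfcleanup as logged (the pid is the one from this run):

scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # subsystems first
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
sync
modprobe -v -r nvme-tcp        # host modules next; nvme_fabrics/nvme_keyring unload too
kill 79478                     # then the target itself (via killprocess)
ip -4 addr flush nvmf_init_if  # finally clear the initiator address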
00:15:13.209 real 0m5.137s 00:15:13.209 user 0m15.969s 00:15:13.209 sys 0m1.123s 00:15:13.209 16:26:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:13.210 ************************************ 00:15:13.210 END TEST nvmf_multicontroller 00:15:13.210 ************************************ 00:15:13.210 16:26:47 -- common/autotest_common.sh@10 -- # set +x 00:15:13.210 16:26:47 -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:15:13.210 16:26:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:13.210 16:26:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:13.210 16:26:47 -- common/autotest_common.sh@10 -- # set +x 00:15:13.210 ************************************ 00:15:13.210 START TEST nvmf_aer 00:15:13.210 ************************************ 00:15:13.210 16:26:47 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:15:13.210 * Looking for test storage... 00:15:13.210 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:13.210 16:26:47 -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:13.210 16:26:47 -- nvmf/common.sh@7 -- # uname -s 00:15:13.210 16:26:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:13.210 16:26:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:13.210 16:26:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:13.210 16:26:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:13.210 16:26:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:13.210 16:26:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:13.210 16:26:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:13.210 16:26:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:13.210 16:26:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:13.210 16:26:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:13.210 16:26:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d 00:15:13.210 16:26:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=35bbb10f-fc38-42ac-b909-033700c5e05d 00:15:13.210 16:26:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:13.210 16:26:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:13.210 16:26:47 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:13.210 16:26:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:13.210 16:26:47 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:13.210 16:26:47 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:13.210 16:26:47 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:13.210 16:26:47 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:13.210 16:26:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.210 16:26:47 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.210 16:26:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.210 16:26:47 -- paths/export.sh@5 -- # export PATH 00:15:13.210 16:26:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.210 16:26:47 -- nvmf/common.sh@47 -- # : 0 00:15:13.210 16:26:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:13.210 16:26:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:13.210 16:26:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:13.210 16:26:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:13.210 16:26:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:13.210 16:26:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:13.210 16:26:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:13.210 16:26:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:13.210 16:26:47 -- host/aer.sh@11 -- # nvmftestinit 00:15:13.210 16:26:47 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:13.210 16:26:47 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:13.210 16:26:47 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:13.210 16:26:47 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:13.210 16:26:47 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:13.210 16:26:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:13.210 16:26:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:13.210 16:26:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:13.210 16:26:47 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:15:13.210 16:26:47 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:15:13.210 16:26:47 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:15:13.210 16:26:47 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:15:13.210 16:26:47 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:15:13.210 16:26:47 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:15:13.210 16:26:47 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:13.210 16:26:47 -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:13.210 16:26:47 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:13.210 16:26:47 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:13.210 16:26:47 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:13.210 16:26:47 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:13.210 16:26:47 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:13.210 16:26:47 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:13.210 16:26:47 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:13.210 16:26:47 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:13.210 16:26:47 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:13.210 16:26:47 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:13.210 16:26:47 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:13.468 16:26:47 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:13.468 Cannot find device "nvmf_tgt_br" 00:15:13.468 16:26:47 -- nvmf/common.sh@155 -- # true 00:15:13.468 16:26:47 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:13.468 Cannot find device "nvmf_tgt_br2" 00:15:13.468 16:26:47 -- nvmf/common.sh@156 -- # true 00:15:13.468 16:26:47 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:13.468 16:26:47 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:13.468 Cannot find device "nvmf_tgt_br" 00:15:13.468 16:26:47 -- nvmf/common.sh@158 -- # true 00:15:13.468 16:26:47 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:13.468 Cannot find device "nvmf_tgt_br2" 00:15:13.468 16:26:47 -- nvmf/common.sh@159 -- # true 00:15:13.468 16:26:47 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:13.468 16:26:47 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:13.468 16:26:47 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:13.468 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:13.468 16:26:47 -- nvmf/common.sh@162 -- # true 00:15:13.468 16:26:47 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:13.468 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:13.468 16:26:47 -- nvmf/common.sh@163 -- # true 00:15:13.468 16:26:47 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:13.468 16:26:47 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:13.468 16:26:47 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:13.468 16:26:47 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:13.469 16:26:47 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:13.469 16:26:47 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:13.469 16:26:47 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:13.469 16:26:47 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:13.469 16:26:47 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:13.469 16:26:47 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:13.469 16:26:47 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:13.469 16:26:47 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:13.469 16:26:47 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:13.469 16:26:47 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:13.469 16:26:47 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:13.469 16:26:47 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:13.469 16:26:47 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:13.469 16:26:47 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:13.469 16:26:47 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:13.469 16:26:47 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:13.727 16:26:47 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:13.727 16:26:47 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:13.727 16:26:47 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:13.727 16:26:47 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:13.727 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:13.727 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:15:13.727 00:15:13.727 --- 10.0.0.2 ping statistics --- 00:15:13.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:13.727 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:15:13.727 16:26:47 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:13.727 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:13.727 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:15:13.727 00:15:13.727 --- 10.0.0.3 ping statistics --- 00:15:13.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:13.727 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:15:13.727 16:26:47 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:13.727 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:13.727 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:15:13.727 00:15:13.727 --- 10.0.0.1 ping statistics --- 00:15:13.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:13.727 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:15:13.727 16:26:47 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:13.727 16:26:47 -- nvmf/common.sh@422 -- # return 0 00:15:13.727 16:26:47 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:13.727 16:26:47 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:13.727 16:26:47 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:13.727 16:26:47 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:13.727 16:26:47 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:13.727 16:26:47 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:13.727 16:26:47 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:13.727 16:26:47 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:15:13.727 16:26:47 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:13.727 16:26:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:13.727 16:26:47 -- common/autotest_common.sh@10 -- # set +x 00:15:13.727 16:26:47 -- nvmf/common.sh@470 -- # nvmfpid=79795 00:15:13.727 16:26:47 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:13.728 16:26:47 -- nvmf/common.sh@471 -- # waitforlisten 79795 00:15:13.728 16:26:47 -- common/autotest_common.sh@817 -- # '[' -z 79795 ']' 00:15:13.728 16:26:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:13.728 16:26:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:13.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:13.728 16:26:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:13.728 16:26:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:13.728 16:26:47 -- common/autotest_common.sh@10 -- # set +x 00:15:13.728 [2024-04-17 16:26:47.650347] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:15:13.728 [2024-04-17 16:26:47.650509] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:13.986 [2024-04-17 16:26:47.802707] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:13.986 [2024-04-17 16:26:47.969450] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:13.986 [2024-04-17 16:26:47.969534] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:13.986 [2024-04-17 16:26:47.969548] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:13.986 [2024-04-17 16:26:47.969559] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:13.986 [2024-04-17 16:26:47.969568] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
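For the aer test the target runs with core mask 0xF (all four cores, matching the reactor notices that follow), and the harness again blocks in waitforlisten until /var/tmp/spdk.sock answers RPCs. A minimal poll in that spirit; this helper is hypothetical, not the harness's exact implementation (waitforlisten itself lives in autotest_common.sh):

wait_rpc_ready() {   # hypothetical helper, sketching what waitforlisten achieves
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1     # target died while starting
        [[ -S $sock ]] && scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
        sleep 0.1
    done
    return 1
}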
00:15:13.986 [2024-04-17 16:26:47.969830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:13.986 [2024-04-17 16:26:47.969908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:13.986 [2024-04-17 16:26:47.970517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:13.986 [2024-04-17 16:26:47.970558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.959 16:26:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:14.959 16:26:48 -- common/autotest_common.sh@850 -- # return 0 00:15:14.959 16:26:48 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:14.959 16:26:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:14.959 16:26:48 -- common/autotest_common.sh@10 -- # set +x 00:15:14.959 16:26:48 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:14.959 16:26:48 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:14.959 16:26:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:14.959 16:26:48 -- common/autotest_common.sh@10 -- # set +x 00:15:14.959 [2024-04-17 16:26:48.755979] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:14.959 16:26:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:14.959 16:26:48 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:15:14.959 16:26:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:14.959 16:26:48 -- common/autotest_common.sh@10 -- # set +x 00:15:14.959 Malloc0 00:15:14.959 16:26:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:14.959 16:26:48 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:15:14.959 16:26:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:14.959 16:26:48 -- common/autotest_common.sh@10 -- # set +x 00:15:14.959 16:26:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:14.959 16:26:48 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:14.959 16:26:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:14.959 16:26:48 -- common/autotest_common.sh@10 -- # set +x 00:15:14.959 16:26:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:14.959 16:26:48 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:14.959 16:26:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:14.959 16:26:48 -- common/autotest_common.sh@10 -- # set +x 00:15:14.959 [2024-04-17 16:26:48.838811] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:14.959 16:26:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:14.959 16:26:48 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:15:14.959 16:26:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:14.959 16:26:48 -- common/autotest_common.sh@10 -- # set +x 00:15:14.959 [2024-04-17 16:26:48.846487] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:15:14.959 [ 00:15:14.959 { 00:15:14.959 "allow_any_host": true, 00:15:14.960 "hosts": [], 00:15:14.960 "listen_addresses": [], 00:15:14.960 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:14.960 "subtype": "Discovery" 00:15:14.960 }, 00:15:14.960 { 00:15:14.960 "allow_any_host": true, 00:15:14.960 "hosts": 
[], 00:15:14.960 "listen_addresses": [ 00:15:14.960 { 00:15:14.960 "adrfam": "IPv4", 00:15:14.960 "traddr": "10.0.0.2", 00:15:14.960 "transport": "TCP", 00:15:14.960 "trsvcid": "4420", 00:15:14.960 "trtype": "TCP" 00:15:14.960 } 00:15:14.960 ], 00:15:14.960 "max_cntlid": 65519, 00:15:14.960 "max_namespaces": 2, 00:15:14.960 "min_cntlid": 1, 00:15:14.960 "model_number": "SPDK bdev Controller", 00:15:14.960 "namespaces": [ 00:15:14.960 { 00:15:14.960 "bdev_name": "Malloc0", 00:15:14.960 "name": "Malloc0", 00:15:14.960 "nguid": "F5216CCB0AE245F9A0CA9E9FBE0530EE", 00:15:14.960 "nsid": 1, 00:15:14.960 "uuid": "f5216ccb-0ae2-45f9-a0ca-9e9fbe0530ee" 00:15:14.960 } 00:15:14.960 ], 00:15:14.960 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:14.960 "serial_number": "SPDK00000000000001", 00:15:14.960 "subtype": "NVMe" 00:15:14.960 } 00:15:14.960 ] 00:15:14.960 16:26:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:14.960 16:26:48 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:14.960 16:26:48 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:15:14.960 16:26:48 -- host/aer.sh@33 -- # aerpid=79849 00:15:14.960 16:26:48 -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:15:14.960 16:26:48 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:15:14.960 16:26:48 -- common/autotest_common.sh@1251 -- # local i=0 00:15:14.960 16:26:48 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:14.960 16:26:48 -- common/autotest_common.sh@1253 -- # '[' 0 -lt 200 ']' 00:15:14.960 16:26:48 -- common/autotest_common.sh@1254 -- # i=1 00:15:14.960 16:26:48 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:15:14.960 16:26:48 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:14.960 16:26:48 -- common/autotest_common.sh@1253 -- # '[' 1 -lt 200 ']' 00:15:14.960 16:26:48 -- common/autotest_common.sh@1254 -- # i=2 00:15:14.960 16:26:48 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:15:15.219 16:26:49 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:15.219 16:26:49 -- common/autotest_common.sh@1258 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:15.219 16:26:49 -- common/autotest_common.sh@1262 -- # return 0 00:15:15.219 16:26:49 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:15:15.219 16:26:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:15.219 16:26:49 -- common/autotest_common.sh@10 -- # set +x 00:15:15.219 Malloc1 00:15:15.219 16:26:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:15.219 16:26:49 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:15:15.219 16:26:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:15.219 16:26:49 -- common/autotest_common.sh@10 -- # set +x 00:15:15.219 16:26:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:15.219 16:26:49 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:15:15.219 16:26:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:15.219 16:26:49 -- common/autotest_common.sh@10 -- # set +x 00:15:15.219 Asynchronous Event Request test 00:15:15.219 Attaching to 10.0.0.2 00:15:15.219 Attached to 10.0.0.2 00:15:15.219 Registering asynchronous event callbacks... 00:15:15.219 Starting namespace attribute notice tests for all controllers... 
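The aer binary has attached to 10.0.0.2 and registered its callbacks; the namespace-attribute notice in the lines that follow is provoked by the harness itself. What just ran, and why it fires the event (commands as logged; the touch-file handshake is how the waitforfile loop above unblocks):

scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
# adding NSID 2 makes the target raise a Namespace Attribute Changed notice
# (AEN event type 0x02, Changed Namespace List log page 0x04); the aer tool's
# aer_cb re-reads the namespace list and touches /tmp/aer_touch_file,
# releasing aer.sh to verify the updated subsystem listing below.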
00:15:15.219 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:15.219 aer_cb - Changed Namespace 00:15:15.219 Cleaning up... 00:15:15.219 [ 00:15:15.219 { 00:15:15.219 "allow_any_host": true, 00:15:15.219 "hosts": [], 00:15:15.219 "listen_addresses": [], 00:15:15.219 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:15.219 "subtype": "Discovery" 00:15:15.219 }, 00:15:15.219 { 00:15:15.219 "allow_any_host": true, 00:15:15.219 "hosts": [], 00:15:15.219 "listen_addresses": [ 00:15:15.219 { 00:15:15.219 "adrfam": "IPv4", 00:15:15.219 "traddr": "10.0.0.2", 00:15:15.219 "transport": "TCP", 00:15:15.219 "trsvcid": "4420", 00:15:15.219 "trtype": "TCP" 00:15:15.219 } 00:15:15.219 ], 00:15:15.219 "max_cntlid": 65519, 00:15:15.219 "max_namespaces": 2, 00:15:15.219 "min_cntlid": 1, 00:15:15.219 "model_number": "SPDK bdev Controller", 00:15:15.219 "namespaces": [ 00:15:15.219 { 00:15:15.219 "bdev_name": "Malloc0", 00:15:15.220 "name": "Malloc0", 00:15:15.220 "nguid": "F5216CCB0AE245F9A0CA9E9FBE0530EE", 00:15:15.220 "nsid": 1, 00:15:15.220 "uuid": "f5216ccb-0ae2-45f9-a0ca-9e9fbe0530ee" 00:15:15.220 }, 00:15:15.220 { 00:15:15.220 "bdev_name": "Malloc1", 00:15:15.220 "name": "Malloc1", 00:15:15.220 "nguid": "3181D9B15F324482B20FAE6BC247B1AF", 00:15:15.220 "nsid": 2, 00:15:15.220 "uuid": "3181d9b1-5f32-4482-b20f-ae6bc247b1af" 00:15:15.220 } 00:15:15.220 ], 00:15:15.220 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:15.220 "serial_number": "SPDK00000000000001", 00:15:15.220 "subtype": "NVMe" 00:15:15.220 } 00:15:15.220 ] 00:15:15.220 16:26:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:15.220 16:26:49 -- host/aer.sh@43 -- # wait 79849 00:15:15.220 16:26:49 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:15.220 16:26:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:15.220 16:26:49 -- common/autotest_common.sh@10 -- # set +x 00:15:15.220 16:26:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:15.220 16:26:49 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:15:15.220 16:26:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:15.220 16:26:49 -- common/autotest_common.sh@10 -- # set +x 00:15:15.478 16:26:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:15.478 16:26:49 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:15.478 16:26:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:15.478 16:26:49 -- common/autotest_common.sh@10 -- # set +x 00:15:15.478 16:26:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:15.478 16:26:49 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:15:15.478 16:26:49 -- host/aer.sh@51 -- # nvmftestfini 00:15:15.478 16:26:49 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:15.478 16:26:49 -- nvmf/common.sh@117 -- # sync 00:15:15.478 16:26:49 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:15.478 16:26:49 -- nvmf/common.sh@120 -- # set +e 00:15:15.478 16:26:49 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:15.478 16:26:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:15.478 rmmod nvme_tcp 00:15:15.478 rmmod nvme_fabrics 00:15:15.478 rmmod nvme_keyring 00:15:15.478 16:26:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:15.478 16:26:49 -- nvmf/common.sh@124 -- # set -e 00:15:15.478 16:26:49 -- nvmf/common.sh@125 -- # return 0 00:15:15.478 16:26:49 -- nvmf/common.sh@478 -- # '[' -n 79795 ']' 00:15:15.478 16:26:49 -- nvmf/common.sh@479 -- # killprocess 79795 00:15:15.478 16:26:49 -- 
common/autotest_common.sh@936 -- # '[' -z 79795 ']' 00:15:15.479 16:26:49 -- common/autotest_common.sh@940 -- # kill -0 79795 00:15:15.479 16:26:49 -- common/autotest_common.sh@941 -- # uname 00:15:15.479 16:26:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:15.479 16:26:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79795 00:15:15.479 16:26:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:15.479 16:26:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:15.479 killing process with pid 79795 00:15:15.479 16:26:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79795' 00:15:15.479 16:26:49 -- common/autotest_common.sh@955 -- # kill 79795 00:15:15.479 [2024-04-17 16:26:49.402092] app.c: 930:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:15:15.479 16:26:49 -- common/autotest_common.sh@960 -- # wait 79795 00:15:15.738 16:26:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:15.738 16:26:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:15.738 16:26:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:15.738 16:26:49 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:15.738 16:26:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:15.738 16:26:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:15.738 16:26:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:15.738 16:26:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:15.998 16:26:49 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:15.998 00:15:15.998 real 0m2.690s 00:15:15.998 user 0m7.142s 00:15:15.998 sys 0m0.809s 00:15:15.998 16:26:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:15.998 ************************************ 00:15:15.998 END TEST nvmf_aer 00:15:15.998 ************************************ 00:15:15.998 16:26:49 -- common/autotest_common.sh@10 -- # set +x 00:15:15.998 16:26:49 -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:15:15.998 16:26:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:15.998 16:26:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:15.998 16:26:49 -- common/autotest_common.sh@10 -- # set +x 00:15:15.998 ************************************ 00:15:15.998 START TEST nvmf_async_init 00:15:15.998 ************************************ 00:15:15.998 16:26:49 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:15:15.998 * Looking for test storage... 
00:15:15.998 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:15.998 16:26:50 -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:15.998 16:26:50 -- nvmf/common.sh@7 -- # uname -s 00:15:15.998 16:26:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:15.998 16:26:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:15.998 16:26:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:15.998 16:26:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:15.998 16:26:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:15.998 16:26:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:15.998 16:26:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:15.998 16:26:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:15.998 16:26:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:15.998 16:26:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:15.998 16:26:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d 00:15:15.998 16:26:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=35bbb10f-fc38-42ac-b909-033700c5e05d 00:15:15.998 16:26:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:15.998 16:26:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:15.998 16:26:50 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:15.998 16:26:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:15.998 16:26:50 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:15.998 16:26:50 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:15.998 16:26:50 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:15.998 16:26:50 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:15.998 16:26:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.998 16:26:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.998 16:26:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.998 16:26:50 -- paths/export.sh@5 -- # export PATH 00:15:15.998 16:26:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.998 16:26:50 -- nvmf/common.sh@47 -- # : 0 00:15:15.998 16:26:50 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:15.998 16:26:50 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:15.998 16:26:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:15.998 16:26:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:15.998 16:26:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:15.998 16:26:50 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:15.998 16:26:50 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:15.998 16:26:50 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:15.998 16:26:50 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:15:15.998 16:26:50 -- host/async_init.sh@14 -- # null_block_size=512 00:15:15.998 16:26:50 -- host/async_init.sh@15 -- # null_bdev=null0 00:15:15.998 16:26:50 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:15:15.998 16:26:50 -- host/async_init.sh@20 -- # tr -d - 00:15:15.998 16:26:50 -- host/async_init.sh@20 -- # uuidgen 00:15:16.256 16:26:50 -- host/async_init.sh@20 -- # nguid=e68b2e18052b4d6db2874d4de37d724d 00:15:16.256 16:26:50 -- host/async_init.sh@22 -- # nvmftestinit 00:15:16.256 16:26:50 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:16.256 16:26:50 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:16.256 16:26:50 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:16.256 16:26:50 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:16.256 16:26:50 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:16.256 16:26:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.256 16:26:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:16.256 16:26:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.256 16:26:50 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:15:16.256 16:26:50 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:15:16.256 16:26:50 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:15:16.256 16:26:50 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:15:16.256 16:26:50 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:15:16.256 16:26:50 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:15:16.256 16:26:50 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:16.256 16:26:50 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:16.256 16:26:50 -- 
nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:16.256 16:26:50 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:16.256 16:26:50 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:16.256 16:26:50 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:16.256 16:26:50 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:16.256 16:26:50 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:16.256 16:26:50 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:16.256 16:26:50 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:16.256 16:26:50 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:16.256 16:26:50 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:16.256 16:26:50 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:16.256 16:26:50 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:16.256 Cannot find device "nvmf_tgt_br" 00:15:16.256 16:26:50 -- nvmf/common.sh@155 -- # true 00:15:16.256 16:26:50 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:16.256 Cannot find device "nvmf_tgt_br2" 00:15:16.256 16:26:50 -- nvmf/common.sh@156 -- # true 00:15:16.256 16:26:50 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:16.256 16:26:50 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:16.256 Cannot find device "nvmf_tgt_br" 00:15:16.256 16:26:50 -- nvmf/common.sh@158 -- # true 00:15:16.256 16:26:50 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:16.256 Cannot find device "nvmf_tgt_br2" 00:15:16.256 16:26:50 -- nvmf/common.sh@159 -- # true 00:15:16.256 16:26:50 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:16.256 16:26:50 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:16.257 16:26:50 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:16.257 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:16.257 16:26:50 -- nvmf/common.sh@162 -- # true 00:15:16.257 16:26:50 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:16.257 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:16.257 16:26:50 -- nvmf/common.sh@163 -- # true 00:15:16.257 16:26:50 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:16.257 16:26:50 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:16.257 16:26:50 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:16.257 16:26:50 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:16.257 16:26:50 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:16.257 16:26:50 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:16.257 16:26:50 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:16.257 16:26:50 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:16.257 16:26:50 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:16.515 16:26:50 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:16.515 16:26:50 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:16.515 16:26:50 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:16.515 16:26:50 -- 
nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:16.515 16:26:50 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:16.515 16:26:50 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:16.515 16:26:50 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:16.515 16:26:50 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:16.515 16:26:50 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:16.515 16:26:50 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:16.515 16:26:50 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:16.515 16:26:50 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:16.515 16:26:50 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:16.515 16:26:50 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:16.515 16:26:50 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:16.515 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:16.515 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:15:16.515 00:15:16.515 --- 10.0.0.2 ping statistics --- 00:15:16.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:16.515 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:15:16.515 16:26:50 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:16.515 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:16.515 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:15:16.515 00:15:16.515 --- 10.0.0.3 ping statistics --- 00:15:16.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:16.515 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:15:16.515 16:26:50 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:16.515 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:16.515 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:15:16.515 00:15:16.515 --- 10.0.0.1 ping statistics --- 00:15:16.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:16.515 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:15:16.515 16:26:50 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:16.515 16:26:50 -- nvmf/common.sh@422 -- # return 0 00:15:16.515 16:26:50 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:16.515 16:26:50 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:16.515 16:26:50 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:16.515 16:26:50 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:16.515 16:26:50 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:16.515 16:26:50 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:16.515 16:26:50 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:16.515 16:26:50 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:15:16.515 16:26:50 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:16.515 16:26:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:16.515 16:26:50 -- common/autotest_common.sh@10 -- # set +x 00:15:16.515 16:26:50 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:16.515 16:26:50 -- nvmf/common.sh@470 -- # nvmfpid=80031 00:15:16.515 16:26:50 -- nvmf/common.sh@471 -- # waitforlisten 80031 00:15:16.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
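(The nvmf_veth_init block above builds the whole test network from scratch: a network namespace, nvmf_tgt_ns_spdk, holding the target-side ends of two veth pairs; a bridge, nvmf_br, joining the free ends; an iptables rule admitting TCP port 4420; and three pings proving initiator-to-target, initiator-to-second-interface, and target-to-initiator reachability before the target is launched inside the namespace. A condensed sketch of the same bring-up for a single link, reusing the interface names and addresses from the log:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                   # target end lives inside the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge; ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                          # bridge joins the two loose ends
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                               # initiator -> target sanity check

This is why the target is started with "ip netns exec nvmf_tgt_ns_spdk": it must listen on 10.0.0.2 from inside the namespace.)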
00:15:16.515 16:26:50 -- common/autotest_common.sh@817 -- # '[' -z 80031 ']' 00:15:16.515 16:26:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:16.515 16:26:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:16.515 16:26:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:16.515 16:26:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:16.515 16:26:50 -- common/autotest_common.sh@10 -- # set +x 00:15:16.515 [2024-04-17 16:26:50.500476] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:15:16.515 [2024-04-17 16:26:50.500842] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:16.774 [2024-04-17 16:26:50.642461] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.774 [2024-04-17 16:26:50.757748] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:16.774 [2024-04-17 16:26:50.758024] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:16.774 [2024-04-17 16:26:50.758046] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:16.774 [2024-04-17 16:26:50.758056] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:16.774 [2024-04-17 16:26:50.758063] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:16.774 [2024-04-17 16:26:50.758108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.708 16:26:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:17.708 16:26:51 -- common/autotest_common.sh@850 -- # return 0 00:15:17.708 16:26:51 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:17.708 16:26:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:17.708 16:26:51 -- common/autotest_common.sh@10 -- # set +x 00:15:17.708 16:26:51 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:17.708 16:26:51 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:15:17.708 16:26:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:17.708 16:26:51 -- common/autotest_common.sh@10 -- # set +x 00:15:17.708 [2024-04-17 16:26:51.516918] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:17.708 16:26:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:17.708 16:26:51 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:15:17.708 16:26:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:17.708 16:26:51 -- common/autotest_common.sh@10 -- # set +x 00:15:17.708 null0 00:15:17.708 16:26:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:17.708 16:26:51 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:15:17.708 16:26:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:17.709 16:26:51 -- common/autotest_common.sh@10 -- # set +x 00:15:17.709 16:26:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:17.709 16:26:51 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:15:17.709 16:26:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:17.709 16:26:51 -- 
common/autotest_common.sh@10 -- # set +x 00:15:17.709 16:26:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:17.709 16:26:51 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g e68b2e18052b4d6db2874d4de37d724d 00:15:17.709 16:26:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:17.709 16:26:51 -- common/autotest_common.sh@10 -- # set +x 00:15:17.709 16:26:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:17.709 16:26:51 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:17.709 16:26:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:17.709 16:26:51 -- common/autotest_common.sh@10 -- # set +x 00:15:17.709 [2024-04-17 16:26:51.557093] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:17.709 16:26:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:17.709 16:26:51 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:15:17.709 16:26:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:17.709 16:26:51 -- common/autotest_common.sh@10 -- # set +x 00:15:17.968 nvme0n1 00:15:17.968 16:26:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:17.968 16:26:51 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:15:17.968 16:26:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:17.968 16:26:51 -- common/autotest_common.sh@10 -- # set +x 00:15:17.968 [ 00:15:17.968 { 00:15:17.968 "aliases": [ 00:15:17.968 "e68b2e18-052b-4d6d-b287-4d4de37d724d" 00:15:17.968 ], 00:15:17.968 "assigned_rate_limits": { 00:15:17.968 "r_mbytes_per_sec": 0, 00:15:17.968 "rw_ios_per_sec": 0, 00:15:17.968 "rw_mbytes_per_sec": 0, 00:15:17.968 "w_mbytes_per_sec": 0 00:15:17.968 }, 00:15:17.968 "block_size": 512, 00:15:17.968 "claimed": false, 00:15:17.968 "driver_specific": { 00:15:17.968 "mp_policy": "active_passive", 00:15:17.968 "nvme": [ 00:15:17.968 { 00:15:17.968 "ctrlr_data": { 00:15:17.968 "ana_reporting": false, 00:15:17.968 "cntlid": 1, 00:15:17.968 "firmware_revision": "24.05", 00:15:17.968 "model_number": "SPDK bdev Controller", 00:15:17.968 "multi_ctrlr": true, 00:15:17.968 "oacs": { 00:15:17.968 "firmware": 0, 00:15:17.968 "format": 0, 00:15:17.968 "ns_manage": 0, 00:15:17.968 "security": 0 00:15:17.968 }, 00:15:17.968 "serial_number": "00000000000000000000", 00:15:17.968 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:17.968 "vendor_id": "0x8086" 00:15:17.968 }, 00:15:17.968 "ns_data": { 00:15:17.968 "can_share": true, 00:15:17.968 "id": 1 00:15:17.968 }, 00:15:17.968 "trid": { 00:15:17.968 "adrfam": "IPv4", 00:15:17.968 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:17.968 "traddr": "10.0.0.2", 00:15:17.968 "trsvcid": "4420", 00:15:17.968 "trtype": "TCP" 00:15:17.968 }, 00:15:17.968 "vs": { 00:15:17.968 "nvme_version": "1.3" 00:15:17.968 } 00:15:17.968 } 00:15:17.968 ] 00:15:17.968 }, 00:15:17.968 "memory_domains": [ 00:15:17.968 { 00:15:17.968 "dma_device_id": "system", 00:15:17.968 "dma_device_type": 1 00:15:17.968 } 00:15:17.968 ], 00:15:17.968 "name": "nvme0n1", 00:15:17.968 "num_blocks": 2097152, 00:15:17.968 "product_name": "NVMe disk", 00:15:17.968 "supported_io_types": { 00:15:17.968 "abort": true, 00:15:17.968 "compare": true, 00:15:17.968 "compare_and_write": true, 00:15:17.968 "flush": true, 00:15:17.968 "nvme_admin": true, 00:15:17.968 "nvme_io": true, 
00:15:17.968 "read": true, 00:15:17.968 "reset": true, 00:15:17.968 "unmap": false, 00:15:17.968 "write": true, 00:15:17.968 "write_zeroes": true 00:15:17.968 }, 00:15:17.968 "uuid": "e68b2e18-052b-4d6d-b287-4d4de37d724d", 00:15:17.968 "zoned": false 00:15:17.968 } 00:15:17.968 ] 00:15:17.968 16:26:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:17.968 16:26:51 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:15:17.968 16:26:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:17.968 16:26:51 -- common/autotest_common.sh@10 -- # set +x 00:15:17.968 [2024-04-17 16:26:51.825403] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:15:17.968 [2024-04-17 16:26:51.825563] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10dd290 (9): Bad file descriptor 00:15:17.968 [2024-04-17 16:26:51.958038] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:15:17.968 16:26:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:17.968 16:26:51 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:15:17.968 16:26:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:17.968 16:26:51 -- common/autotest_common.sh@10 -- # set +x 00:15:17.968 [ 00:15:17.968 { 00:15:17.968 "aliases": [ 00:15:17.968 "e68b2e18-052b-4d6d-b287-4d4de37d724d" 00:15:17.968 ], 00:15:17.968 "assigned_rate_limits": { 00:15:17.968 "r_mbytes_per_sec": 0, 00:15:17.968 "rw_ios_per_sec": 0, 00:15:17.968 "rw_mbytes_per_sec": 0, 00:15:17.968 "w_mbytes_per_sec": 0 00:15:17.968 }, 00:15:17.968 "block_size": 512, 00:15:17.968 "claimed": false, 00:15:17.968 "driver_specific": { 00:15:17.968 "mp_policy": "active_passive", 00:15:17.968 "nvme": [ 00:15:17.968 { 00:15:17.968 "ctrlr_data": { 00:15:17.968 "ana_reporting": false, 00:15:17.968 "cntlid": 2, 00:15:17.968 "firmware_revision": "24.05", 00:15:17.968 "model_number": "SPDK bdev Controller", 00:15:17.968 "multi_ctrlr": true, 00:15:17.968 "oacs": { 00:15:17.968 "firmware": 0, 00:15:17.968 "format": 0, 00:15:17.968 "ns_manage": 0, 00:15:17.968 "security": 0 00:15:17.968 }, 00:15:17.968 "serial_number": "00000000000000000000", 00:15:17.968 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:17.968 "vendor_id": "0x8086" 00:15:17.968 }, 00:15:17.968 "ns_data": { 00:15:17.968 "can_share": true, 00:15:17.968 "id": 1 00:15:17.968 }, 00:15:17.968 "trid": { 00:15:17.968 "adrfam": "IPv4", 00:15:17.968 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:17.968 "traddr": "10.0.0.2", 00:15:17.968 "trsvcid": "4420", 00:15:17.968 "trtype": "TCP" 00:15:17.968 }, 00:15:17.968 "vs": { 00:15:17.968 "nvme_version": "1.3" 00:15:17.968 } 00:15:17.968 } 00:15:17.968 ] 00:15:17.968 }, 00:15:17.968 "memory_domains": [ 00:15:17.968 { 00:15:17.968 "dma_device_id": "system", 00:15:17.968 "dma_device_type": 1 00:15:17.968 } 00:15:17.968 ], 00:15:17.968 "name": "nvme0n1", 00:15:17.968 "num_blocks": 2097152, 00:15:17.968 "product_name": "NVMe disk", 00:15:17.968 "supported_io_types": { 00:15:17.968 "abort": true, 00:15:17.968 "compare": true, 00:15:17.968 "compare_and_write": true, 00:15:17.968 "flush": true, 00:15:17.968 "nvme_admin": true, 00:15:17.968 "nvme_io": true, 00:15:17.968 "read": true, 00:15:17.968 "reset": true, 00:15:17.968 "unmap": false, 00:15:17.968 "write": true, 00:15:17.968 "write_zeroes": true 00:15:17.968 }, 00:15:17.968 "uuid": "e68b2e18-052b-4d6d-b287-4d4de37d724d", 00:15:17.968 "zoned": false 00:15:17.968 } 00:15:17.968 
] 00:15:17.968 16:26:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:17.968 16:26:51 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:17.968 16:26:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:17.968 16:26:51 -- common/autotest_common.sh@10 -- # set +x 00:15:17.968 16:26:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:17.968 16:26:52 -- host/async_init.sh@53 -- # mktemp 00:15:18.227 16:26:52 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.LENQPWhsM4 00:15:18.227 16:26:52 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:15:18.227 16:26:52 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.LENQPWhsM4 00:15:18.227 16:26:52 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:15:18.227 16:26:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:18.227 16:26:52 -- common/autotest_common.sh@10 -- # set +x 00:15:18.227 16:26:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:18.227 16:26:52 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:15:18.227 16:26:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:18.227 16:26:52 -- common/autotest_common.sh@10 -- # set +x 00:15:18.227 [2024-04-17 16:26:52.033608] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:18.227 [2024-04-17 16:26:52.034077] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:18.227 16:26:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:18.227 16:26:52 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LENQPWhsM4 00:15:18.227 16:26:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:18.227 16:26:52 -- common/autotest_common.sh@10 -- # set +x 00:15:18.227 [2024-04-17 16:26:52.041584] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:18.227 16:26:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:18.227 16:26:52 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LENQPWhsM4 00:15:18.227 16:26:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:18.227 16:26:52 -- common/autotest_common.sh@10 -- # set +x 00:15:18.227 [2024-04-17 16:26:52.049590] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:18.227 [2024-04-17 16:26:52.049660] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:18.227 nvme0n1 00:15:18.227 16:26:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:18.227 16:26:52 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:15:18.227 16:26:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:18.227 16:26:52 -- common/autotest_common.sh@10 -- # set +x 00:15:18.227 [ 00:15:18.227 { 00:15:18.227 "aliases": [ 00:15:18.227 "e68b2e18-052b-4d6d-b287-4d4de37d724d" 00:15:18.227 ], 00:15:18.227 "assigned_rate_limits": { 00:15:18.227 "r_mbytes_per_sec": 0, 00:15:18.227 "rw_ios_per_sec": 0, 00:15:18.227 "rw_mbytes_per_sec": 0, 00:15:18.227 
"w_mbytes_per_sec": 0 00:15:18.227 }, 00:15:18.227 "block_size": 512, 00:15:18.227 "claimed": false, 00:15:18.227 "driver_specific": { 00:15:18.227 "mp_policy": "active_passive", 00:15:18.227 "nvme": [ 00:15:18.227 { 00:15:18.227 "ctrlr_data": { 00:15:18.227 "ana_reporting": false, 00:15:18.227 "cntlid": 3, 00:15:18.227 "firmware_revision": "24.05", 00:15:18.227 "model_number": "SPDK bdev Controller", 00:15:18.227 "multi_ctrlr": true, 00:15:18.227 "oacs": { 00:15:18.227 "firmware": 0, 00:15:18.227 "format": 0, 00:15:18.227 "ns_manage": 0, 00:15:18.227 "security": 0 00:15:18.227 }, 00:15:18.227 "serial_number": "00000000000000000000", 00:15:18.227 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:18.227 "vendor_id": "0x8086" 00:15:18.227 }, 00:15:18.227 "ns_data": { 00:15:18.227 "can_share": true, 00:15:18.227 "id": 1 00:15:18.227 }, 00:15:18.227 "trid": { 00:15:18.227 "adrfam": "IPv4", 00:15:18.227 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:18.227 "traddr": "10.0.0.2", 00:15:18.227 "trsvcid": "4421", 00:15:18.227 "trtype": "TCP" 00:15:18.227 }, 00:15:18.227 "vs": { 00:15:18.227 "nvme_version": "1.3" 00:15:18.227 } 00:15:18.227 } 00:15:18.227 ] 00:15:18.227 }, 00:15:18.227 "memory_domains": [ 00:15:18.227 { 00:15:18.227 "dma_device_id": "system", 00:15:18.227 "dma_device_type": 1 00:15:18.227 } 00:15:18.227 ], 00:15:18.227 "name": "nvme0n1", 00:15:18.227 "num_blocks": 2097152, 00:15:18.227 "product_name": "NVMe disk", 00:15:18.227 "supported_io_types": { 00:15:18.227 "abort": true, 00:15:18.227 "compare": true, 00:15:18.227 "compare_and_write": true, 00:15:18.227 "flush": true, 00:15:18.227 "nvme_admin": true, 00:15:18.227 "nvme_io": true, 00:15:18.227 "read": true, 00:15:18.227 "reset": true, 00:15:18.227 "unmap": false, 00:15:18.227 "write": true, 00:15:18.227 "write_zeroes": true 00:15:18.227 }, 00:15:18.227 "uuid": "e68b2e18-052b-4d6d-b287-4d4de37d724d", 00:15:18.227 "zoned": false 00:15:18.227 } 00:15:18.227 ] 00:15:18.227 16:26:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:18.227 16:26:52 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:18.227 16:26:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:18.227 16:26:52 -- common/autotest_common.sh@10 -- # set +x 00:15:18.227 16:26:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:18.227 16:26:52 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.LENQPWhsM4 00:15:18.227 16:26:52 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:15:18.227 16:26:52 -- host/async_init.sh@78 -- # nvmftestfini 00:15:18.227 16:26:52 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:18.227 16:26:52 -- nvmf/common.sh@117 -- # sync 00:15:18.227 16:26:52 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:18.227 16:26:52 -- nvmf/common.sh@120 -- # set +e 00:15:18.227 16:26:52 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:18.227 16:26:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:18.227 rmmod nvme_tcp 00:15:18.227 rmmod nvme_fabrics 00:15:18.227 rmmod nvme_keyring 00:15:18.227 16:26:52 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:18.486 16:26:52 -- nvmf/common.sh@124 -- # set -e 00:15:18.486 16:26:52 -- nvmf/common.sh@125 -- # return 0 00:15:18.486 16:26:52 -- nvmf/common.sh@478 -- # '[' -n 80031 ']' 00:15:18.486 16:26:52 -- nvmf/common.sh@479 -- # killprocess 80031 00:15:18.486 16:26:52 -- common/autotest_common.sh@936 -- # '[' -z 80031 ']' 00:15:18.486 16:26:52 -- common/autotest_common.sh@940 -- # kill -0 80031 00:15:18.486 16:26:52 -- common/autotest_common.sh@941 -- # 
uname 00:15:18.486 16:26:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:18.486 16:26:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80031 00:15:18.486 killing process with pid 80031 00:15:18.486 16:26:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:18.486 16:26:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:18.486 16:26:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80031' 00:15:18.486 16:26:52 -- common/autotest_common.sh@955 -- # kill 80031 00:15:18.486 [2024-04-17 16:26:52.305198] app.c: 930:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:18.486 [2024-04-17 16:26:52.305259] app.c: 930:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:18.486 16:26:52 -- common/autotest_common.sh@960 -- # wait 80031 00:15:18.745 16:26:52 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:18.745 16:26:52 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:18.745 16:26:52 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:18.745 16:26:52 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:18.745 16:26:52 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:18.745 16:26:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:18.745 16:26:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:18.745 16:26:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:18.745 16:26:52 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:18.745 00:15:18.745 real 0m2.670s 00:15:18.745 user 0m2.449s 00:15:18.745 sys 0m0.630s 00:15:18.745 ************************************ 00:15:18.745 END TEST nvmf_async_init 00:15:18.745 ************************************ 00:15:18.745 16:26:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:18.745 16:26:52 -- common/autotest_common.sh@10 -- # set +x 00:15:18.745 16:26:52 -- nvmf/nvmf.sh@92 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:15:18.745 16:26:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:18.745 16:26:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:18.745 16:26:52 -- common/autotest_common.sh@10 -- # set +x 00:15:18.745 ************************************ 00:15:18.745 START TEST dma 00:15:18.745 ************************************ 00:15:18.745 16:26:52 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:15:19.004 * Looking for test storage... 
00:15:19.004 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:19.004 16:26:52 -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:19.004 16:26:52 -- nvmf/common.sh@7 -- # uname -s 00:15:19.004 16:26:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:19.004 16:26:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:19.004 16:26:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:19.004 16:26:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:19.004 16:26:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:19.004 16:26:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:19.004 16:26:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:19.004 16:26:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:19.004 16:26:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:19.004 16:26:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:19.004 16:26:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d 00:15:19.004 16:26:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=35bbb10f-fc38-42ac-b909-033700c5e05d 00:15:19.004 16:26:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:19.004 16:26:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:19.004 16:26:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:19.004 16:26:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:19.004 16:26:52 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:19.004 16:26:52 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:19.004 16:26:52 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:19.004 16:26:52 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:19.004 16:26:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.004 16:26:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.004 16:26:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.004 16:26:52 -- paths/export.sh@5 -- # export PATH 00:15:19.004 16:26:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.004 16:26:52 -- nvmf/common.sh@47 -- # : 0 00:15:19.004 16:26:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:19.004 16:26:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:19.004 16:26:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:19.004 16:26:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:19.004 16:26:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:19.004 16:26:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:19.004 16:26:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:19.004 16:26:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:19.004 16:26:52 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:15:19.004 16:26:52 -- host/dma.sh@13 -- # exit 0 00:15:19.004 00:15:19.004 real 0m0.114s 00:15:19.004 user 0m0.054s 00:15:19.004 sys 0m0.065s 00:15:19.004 ************************************ 00:15:19.004 END TEST dma 00:15:19.004 ************************************ 00:15:19.004 16:26:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:19.004 16:26:52 -- common/autotest_common.sh@10 -- # set +x 00:15:19.004 16:26:52 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:15:19.004 16:26:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:19.004 16:26:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:19.004 16:26:52 -- common/autotest_common.sh@10 -- # set +x 00:15:19.004 ************************************ 00:15:19.004 START TEST nvmf_identify 00:15:19.004 ************************************ 00:15:19.004 16:26:52 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:15:19.004 * Looking for test storage... 
00:15:19.004 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:19.004 16:26:53 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:19.004 16:26:53 -- nvmf/common.sh@7 -- # uname -s 00:15:19.004 16:26:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:19.004 16:26:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:19.004 16:26:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:19.004 16:26:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:19.004 16:26:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:19.004 16:26:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:19.004 16:26:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:19.004 16:26:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:19.004 16:26:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:19.004 16:26:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:19.004 16:26:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d 00:15:19.004 16:26:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=35bbb10f-fc38-42ac-b909-033700c5e05d 00:15:19.004 16:26:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:19.004 16:26:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:19.004 16:26:53 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:19.004 16:26:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:19.263 16:26:53 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:19.263 16:26:53 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:19.263 16:26:53 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:19.263 16:26:53 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:19.263 16:26:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.263 16:26:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.263 16:26:53 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.263 16:26:53 -- paths/export.sh@5 -- # export PATH 00:15:19.263 16:26:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.263 16:26:53 -- nvmf/common.sh@47 -- # : 0 00:15:19.263 16:26:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:19.263 16:26:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:19.263 16:26:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:19.263 16:26:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:19.263 16:26:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:19.263 16:26:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:19.263 16:26:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:19.263 16:26:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:19.263 16:26:53 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:19.264 16:26:53 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:19.264 16:26:53 -- host/identify.sh@14 -- # nvmftestinit 00:15:19.264 16:26:53 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:19.264 16:26:53 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:19.264 16:26:53 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:19.264 16:26:53 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:19.264 16:26:53 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:19.264 16:26:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:19.264 16:26:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:19.264 16:26:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:19.264 16:26:53 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:15:19.264 16:26:53 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:15:19.264 16:26:53 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:15:19.264 16:26:53 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:15:19.264 16:26:53 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:15:19.264 16:26:53 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:15:19.264 16:26:53 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:19.264 16:26:53 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:19.264 16:26:53 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:19.264 16:26:53 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:19.264 16:26:53 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:19.264 16:26:53 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:19.264 16:26:53 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:19.264 16:26:53 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:19.264 16:26:53 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:19.264 16:26:53 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:19.264 16:26:53 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:19.264 16:26:53 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:19.264 16:26:53 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:19.264 16:26:53 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:19.264 Cannot find device "nvmf_tgt_br" 00:15:19.264 16:26:53 -- nvmf/common.sh@155 -- # true 00:15:19.264 16:26:53 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:19.264 Cannot find device "nvmf_tgt_br2" 00:15:19.264 16:26:53 -- nvmf/common.sh@156 -- # true 00:15:19.264 16:26:53 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:19.264 16:26:53 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:19.264 Cannot find device "nvmf_tgt_br" 00:15:19.264 16:26:53 -- nvmf/common.sh@158 -- # true 00:15:19.264 16:26:53 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:19.264 Cannot find device "nvmf_tgt_br2" 00:15:19.264 16:26:53 -- nvmf/common.sh@159 -- # true 00:15:19.264 16:26:53 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:19.264 16:26:53 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:19.264 16:26:53 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:19.264 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:19.264 16:26:53 -- nvmf/common.sh@162 -- # true 00:15:19.264 16:26:53 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:19.264 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:19.264 16:26:53 -- nvmf/common.sh@163 -- # true 00:15:19.264 16:26:53 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:19.264 16:26:53 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:19.264 16:26:53 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:19.264 16:26:53 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:19.264 16:26:53 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:19.264 16:26:53 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:19.264 16:26:53 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:19.264 16:26:53 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:19.264 16:26:53 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:19.264 16:26:53 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:19.264 16:26:53 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:19.264 16:26:53 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:19.264 16:26:53 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:19.264 16:26:53 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:19.264 16:26:53 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:19.264 16:26:53 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:15:19.264 16:26:53 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:19.264 16:26:53 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:19.522 16:26:53 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:19.522 16:26:53 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:19.522 16:26:53 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:19.522 16:26:53 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:19.522 16:26:53 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:19.522 16:26:53 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:19.522 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:19.522 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:15:19.522 00:15:19.522 --- 10.0.0.2 ping statistics --- 00:15:19.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.522 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:15:19.522 16:26:53 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:19.522 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:19.522 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:15:19.522 00:15:19.522 --- 10.0.0.3 ping statistics --- 00:15:19.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.522 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:15:19.522 16:26:53 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:19.522 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:19.522 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:15:19.522 00:15:19.522 --- 10.0.0.1 ping statistics --- 00:15:19.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.522 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:15:19.522 16:26:53 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:19.522 16:26:53 -- nvmf/common.sh@422 -- # return 0 00:15:19.522 16:26:53 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:19.522 16:26:53 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:19.522 16:26:53 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:19.522 16:26:53 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:19.522 16:26:53 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:19.522 16:26:53 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:19.522 16:26:53 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:19.522 16:26:53 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:15:19.522 16:26:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:19.522 16:26:53 -- common/autotest_common.sh@10 -- # set +x 00:15:19.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:19.522 16:26:53 -- host/identify.sh@19 -- # nvmfpid=80314 00:15:19.522 16:26:53 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:19.522 16:26:53 -- host/identify.sh@23 -- # waitforlisten 80314 00:15:19.522 16:26:53 -- common/autotest_common.sh@817 -- # '[' -z 80314 ']' 00:15:19.522 16:26:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:19.522 16:26:53 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:19.522 16:26:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:19.522 16:26:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:19.522 16:26:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:19.523 16:26:53 -- common/autotest_common.sh@10 -- # set +x 00:15:19.523 [2024-04-17 16:26:53.468369] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:15:19.523 [2024-04-17 16:26:53.468496] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:19.780 [2024-04-17 16:26:53.613460] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:19.780 [2024-04-17 16:26:53.740338] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:19.780 [2024-04-17 16:26:53.740625] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:19.780 [2024-04-17 16:26:53.740940] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:19.780 [2024-04-17 16:26:53.741096] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:19.780 [2024-04-17 16:26:53.741203] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:19.780 [2024-04-17 16:26:53.741412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:19.780 [2024-04-17 16:26:53.741563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:19.780 [2024-04-17 16:26:53.741916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:19.780 [2024-04-17 16:26:53.741969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.716 16:26:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:20.716 16:26:54 -- common/autotest_common.sh@850 -- # return 0 00:15:20.716 16:26:54 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:20.716 16:26:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:20.716 16:26:54 -- common/autotest_common.sh@10 -- # set +x 00:15:20.716 [2024-04-17 16:26:54.478801] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:20.716 16:26:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:20.716 16:26:54 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:15:20.716 16:26:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:20.716 16:26:54 -- common/autotest_common.sh@10 -- # set +x 00:15:20.716 16:26:54 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:20.716 16:26:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:20.716 16:26:54 -- common/autotest_common.sh@10 -- # set +x 00:15:20.716 Malloc0 00:15:20.716 16:26:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:20.716 16:26:54 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:20.716 16:26:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:20.716 16:26:54 -- common/autotest_common.sh@10 -- # set +x 00:15:20.716 16:26:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:20.716 16:26:54 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:15:20.716 16:26:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:20.716 16:26:54 -- common/autotest_common.sh@10 -- # set +x 00:15:20.716 16:26:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:20.716 16:26:54 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:20.716 16:26:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:20.716 16:26:54 -- common/autotest_common.sh@10 -- # set +x 00:15:20.716 [2024-04-17 16:26:54.600261] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:20.716 16:26:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:20.716 16:26:54 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:20.716 16:26:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:20.716 16:26:54 -- common/autotest_common.sh@10 -- # set +x 00:15:20.716 16:26:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:20.716 16:26:54 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:15:20.716 16:26:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:20.716 16:26:54 -- common/autotest_common.sh@10 -- # set +x 00:15:20.716 [2024-04-17 16:26:54.615992] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:15:20.716 [ 
00:15:20.716 { 00:15:20.716 "allow_any_host": true, 00:15:20.716 "hosts": [], 00:15:20.716 "listen_addresses": [ 00:15:20.716 { 00:15:20.716 "adrfam": "IPv4", 00:15:20.716 "traddr": "10.0.0.2", 00:15:20.716 "transport": "TCP", 00:15:20.716 "trsvcid": "4420", 00:15:20.716 "trtype": "TCP" 00:15:20.716 } 00:15:20.716 ], 00:15:20.716 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:20.716 "subtype": "Discovery" 00:15:20.716 }, 00:15:20.716 { 00:15:20.716 "allow_any_host": true, 00:15:20.716 "hosts": [], 00:15:20.716 "listen_addresses": [ 00:15:20.716 { 00:15:20.716 "adrfam": "IPv4", 00:15:20.716 "traddr": "10.0.0.2", 00:15:20.716 "transport": "TCP", 00:15:20.716 "trsvcid": "4420", 00:15:20.716 "trtype": "TCP" 00:15:20.716 } 00:15:20.716 ], 00:15:20.716 "max_cntlid": 65519, 00:15:20.716 "max_namespaces": 32, 00:15:20.716 "min_cntlid": 1, 00:15:20.716 "model_number": "SPDK bdev Controller", 00:15:20.716 "namespaces": [ 00:15:20.716 { 00:15:20.716 "bdev_name": "Malloc0", 00:15:20.716 "eui64": "ABCDEF0123456789", 00:15:20.716 "name": "Malloc0", 00:15:20.716 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:15:20.716 "nsid": 1, 00:15:20.716 "uuid": "92372d18-a592-4470-bd88-eb48574144c2" 00:15:20.716 } 00:15:20.716 ], 00:15:20.716 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:20.716 "serial_number": "SPDK00000000000001", 00:15:20.716 "subtype": "NVMe" 00:15:20.716 } 00:15:20.716 ] 00:15:20.716 16:26:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:20.716 16:26:54 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:15:20.716 [2024-04-17 16:26:54.654427] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 
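[editor's note] rpc_cmd in the trace above is a thin wrapper around scripts/rpc.py pointed at the target's socket, so the provisioning sequence just logged can be replayed verbatim against any running nvmf_tgt. A sketch using the same RPC names and arguments as the trace:

RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0           # 64 MB bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_get_subsystems                             # dumps the JSON shown above

spdk_nvme_identify is then launched (host/identify.sh@39) with a connect string naming the discovery subsystem, which produces the controller bring-up trace that follows.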
00:15:20.716 [2024-04-17 16:26:54.654726] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80368 ] 00:15:20.977 [2024-04-17 16:26:54.793113] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:15:20.977 [2024-04-17 16:26:54.793198] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:20.977 [2024-04-17 16:26:54.793206] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:20.977 [2024-04-17 16:26:54.793222] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:20.977 [2024-04-17 16:26:54.793235] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:15:20.977 [2024-04-17 16:26:54.793411] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:15:20.977 [2024-04-17 16:26:54.793464] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1ee8300 0 00:15:20.977 [2024-04-17 16:26:54.797793] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:20.977 [2024-04-17 16:26:54.797819] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:20.977 [2024-04-17 16:26:54.797826] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:20.977 [2024-04-17 16:26:54.797830] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:20.977 [2024-04-17 16:26:54.797881] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.977 [2024-04-17 16:26:54.797889] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.977 [2024-04-17 16:26:54.797893] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ee8300) 00:15:20.977 [2024-04-17 16:26:54.797909] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:20.977 [2024-04-17 16:26:54.797943] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f309c0, cid 0, qid 0 00:15:20.977 [2024-04-17 16:26:54.805800] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.977 [2024-04-17 16:26:54.805830] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.977 [2024-04-17 16:26:54.805836] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.977 [2024-04-17 16:26:54.805842] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f309c0) on tqpair=0x1ee8300 00:15:20.977 [2024-04-17 16:26:54.805858] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:20.977 [2024-04-17 16:26:54.805868] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:15:20.977 [2024-04-17 16:26:54.805874] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:15:20.977 [2024-04-17 16:26:54.805896] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.977 [2024-04-17 16:26:54.805902] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.977 [2024-04-17 
16:26:54.805907] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ee8300) 00:15:20.977 [2024-04-17 16:26:54.805925] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.977 [2024-04-17 16:26:54.805958] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f309c0, cid 0, qid 0 00:15:20.977 [2024-04-17 16:26:54.806041] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.977 [2024-04-17 16:26:54.806049] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.977 [2024-04-17 16:26:54.806053] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.977 [2024-04-17 16:26:54.806057] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f309c0) on tqpair=0x1ee8300 00:15:20.977 [2024-04-17 16:26:54.806069] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:15:20.977 [2024-04-17 16:26:54.806078] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:15:20.978 [2024-04-17 16:26:54.806087] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.978 [2024-04-17 16:26:54.806091] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.978 [2024-04-17 16:26:54.806096] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ee8300) 00:15:20.978 [2024-04-17 16:26:54.806104] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.978 [2024-04-17 16:26:54.806126] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f309c0, cid 0, qid 0 00:15:20.978 [2024-04-17 16:26:54.806189] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.978 [2024-04-17 16:26:54.806196] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.978 [2024-04-17 16:26:54.806200] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.978 [2024-04-17 16:26:54.806204] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f309c0) on tqpair=0x1ee8300 00:15:20.978 [2024-04-17 16:26:54.806212] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:15:20.978 [2024-04-17 16:26:54.806221] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:15:20.978 [2024-04-17 16:26:54.806229] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.978 [2024-04-17 16:26:54.806234] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.978 [2024-04-17 16:26:54.806238] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ee8300) 00:15:20.978 [2024-04-17 16:26:54.806246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.978 [2024-04-17 16:26:54.806265] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f309c0, cid 0, qid 0 00:15:20.978 [2024-04-17 16:26:54.806323] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.978 [2024-04-17 16:26:54.806330] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.978 [2024-04-17 16:26:54.806334] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.978 [2024-04-17 16:26:54.806344] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f309c0) on tqpair=0x1ee8300 00:15:20.978 [2024-04-17 16:26:54.806351] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:20.978 [2024-04-17 16:26:54.806362] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.978 [2024-04-17 16:26:54.806367] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.978 [2024-04-17 16:26:54.806371] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ee8300) 00:15:20.978 [2024-04-17 16:26:54.806379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.978 [2024-04-17 16:26:54.806397] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f309c0, cid 0, qid 0 00:15:20.978 [2024-04-17 16:26:54.806456] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.978 [2024-04-17 16:26:54.806463] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.978 [2024-04-17 16:26:54.806466] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.978 [2024-04-17 16:26:54.806471] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f309c0) on tqpair=0x1ee8300 00:15:20.978 [2024-04-17 16:26:54.806477] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:15:20.978 [2024-04-17 16:26:54.806483] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:15:20.978 [2024-04-17 16:26:54.806491] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:20.978 [2024-04-17 16:26:54.806597] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:15:20.978 [2024-04-17 16:26:54.806612] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:20.978 [2024-04-17 16:26:54.806623] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.978 [2024-04-17 16:26:54.806628] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.978 [2024-04-17 16:26:54.806632] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ee8300) 00:15:20.978 [2024-04-17 16:26:54.806640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.978 [2024-04-17 16:26:54.806661] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f309c0, cid 0, qid 0 00:15:20.978 [2024-04-17 16:26:54.806720] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.978 [2024-04-17 16:26:54.806727] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.978 [2024-04-17 16:26:54.806731] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
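[editor's note] The DEBUG entries here are spdk_nvme_identify walking the standard fabrics bring-up against the discovery controller: FABRIC CONNECT on the admin queue, property reads of VS and CAP, then the CC.EN = 0 / CSTS.RDY = 0 disable check before re-enabling (CC.EN = 1 above, with the wait for CSTS.RDY = 1 logged just below). Since nvme-tcp was modprobed during setup, the same listeners could also be exercised from the kernel initiator with nvme-cli -- illustrative only, this test never does so:

nvme discover -t tcp -a 10.0.0.2 -s 4420             # same discovery log spdk_nvme_identify fetches
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
nvme list                                            # Malloc0 appears as a namespace
nvme disconnect -n nqn.2016-06.io.spdk:cnode1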
00:15:20.978 [2024-04-17 16:26:54.806735] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f309c0) on tqpair=0x1ee8300 00:15:20.978 [2024-04-17 16:26:54.806742] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:20.978 [2024-04-17 16:26:54.806753] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.978 [2024-04-17 16:26:54.806757] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.978 [2024-04-17 16:26:54.806761] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ee8300) 00:15:20.978 [2024-04-17 16:26:54.806769] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.978 [2024-04-17 16:26:54.806805] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f309c0, cid 0, qid 0 00:15:20.978 [2024-04-17 16:26:54.806866] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.978 [2024-04-17 16:26:54.806880] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.978 [2024-04-17 16:26:54.806884] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.978 [2024-04-17 16:26:54.806889] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f309c0) on tqpair=0x1ee8300 00:15:20.978 [2024-04-17 16:26:54.806895] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:20.978 [2024-04-17 16:26:54.806900] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:15:20.978 [2024-04-17 16:26:54.806909] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:15:20.978 [2024-04-17 16:26:54.806920] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:15:20.978 [2024-04-17 16:26:54.806932] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.978 [2024-04-17 16:26:54.806936] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ee8300) 00:15:20.978 [2024-04-17 16:26:54.806945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.978 [2024-04-17 16:26:54.806965] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f309c0, cid 0, qid 0 00:15:20.978 [2024-04-17 16:26:54.807070] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:20.978 [2024-04-17 16:26:54.807077] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:20.978 [2024-04-17 16:26:54.807081] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:20.978 [2024-04-17 16:26:54.807086] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ee8300): datao=0, datal=4096, cccid=0 00:15:20.978 [2024-04-17 16:26:54.807091] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f309c0) on tqpair(0x1ee8300): expected_datao=0, payload_size=4096 00:15:20.978 [2024-04-17 16:26:54.807097] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:15:20.978 [2024-04-17 16:26:54.807106] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:20.978 [2024-04-17 16:26:54.807111] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:20.978 [2024-04-17 16:26:54.807121] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.978 [2024-04-17 16:26:54.807127] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.978 [2024-04-17 16:26:54.807131] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.978 [2024-04-17 16:26:54.807135] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f309c0) on tqpair=0x1ee8300 00:15:20.978 [2024-04-17 16:26:54.807147] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:15:20.978 [2024-04-17 16:26:54.807152] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:15:20.978 [2024-04-17 16:26:54.807157] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:15:20.978 [2024-04-17 16:26:54.807168] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:15:20.978 [2024-04-17 16:26:54.807174] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:15:20.978 [2024-04-17 16:26:54.807179] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:15:20.978 [2024-04-17 16:26:54.807189] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:15:20.978 [2024-04-17 16:26:54.807198] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.978 [2024-04-17 16:26:54.807202] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.978 [2024-04-17 16:26:54.807206] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ee8300) 00:15:20.978 [2024-04-17 16:26:54.807215] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:20.978 [2024-04-17 16:26:54.807236] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f309c0, cid 0, qid 0 00:15:20.978 [2024-04-17 16:26:54.807308] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.978 [2024-04-17 16:26:54.807320] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.978 [2024-04-17 16:26:54.807325] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.978 [2024-04-17 16:26:54.807330] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f309c0) on tqpair=0x1ee8300 00:15:20.978 [2024-04-17 16:26:54.807340] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.978 [2024-04-17 16:26:54.807344] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.978 [2024-04-17 16:26:54.807348] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ee8300) 00:15:20.978 [2024-04-17 16:26:54.807356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:20.978 [2024-04-17 16:26:54.807363] 
nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.978 [2024-04-17 16:26:54.807368] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.978 [2024-04-17 16:26:54.807372] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1ee8300) 00:15:20.979 [2024-04-17 16:26:54.807378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:20.979 [2024-04-17 16:26:54.807385] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.979 [2024-04-17 16:26:54.807389] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.979 [2024-04-17 16:26:54.807393] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1ee8300) 00:15:20.979 [2024-04-17 16:26:54.807400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:20.979 [2024-04-17 16:26:54.807406] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.979 [2024-04-17 16:26:54.807411] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.979 [2024-04-17 16:26:54.807415] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ee8300) 00:15:20.979 [2024-04-17 16:26:54.807421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:20.979 [2024-04-17 16:26:54.807426] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:15:20.979 [2024-04-17 16:26:54.807440] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:20.979 [2024-04-17 16:26:54.807449] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.979 [2024-04-17 16:26:54.807453] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ee8300) 00:15:20.979 [2024-04-17 16:26:54.807461] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.979 [2024-04-17 16:26:54.807484] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f309c0, cid 0, qid 0 00:15:20.979 [2024-04-17 16:26:54.807491] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30b20, cid 1, qid 0 00:15:20.979 [2024-04-17 16:26:54.807497] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30c80, cid 2, qid 0 00:15:20.979 [2024-04-17 16:26:54.807502] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30de0, cid 3, qid 0 00:15:20.979 [2024-04-17 16:26:54.807507] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30f40, cid 4, qid 0 00:15:20.979 [2024-04-17 16:26:54.807612] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.979 [2024-04-17 16:26:54.807621] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.979 [2024-04-17 16:26:54.807625] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.979 [2024-04-17 16:26:54.807629] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f30f40) on tqpair=0x1ee8300 00:15:20.979 [2024-04-17 16:26:54.807647] 
nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:15:20.979 [2024-04-17 16:26:54.807653] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:15:20.979 [2024-04-17 16:26:54.807665] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.979 [2024-04-17 16:26:54.807670] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ee8300) 00:15:20.979 [2024-04-17 16:26:54.807678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.979 [2024-04-17 16:26:54.807698] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30f40, cid 4, qid 0 00:15:20.979 [2024-04-17 16:26:54.807782] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:20.979 [2024-04-17 16:26:54.807798] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:20.979 [2024-04-17 16:26:54.807803] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:20.979 [2024-04-17 16:26:54.807808] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ee8300): datao=0, datal=4096, cccid=4 00:15:20.979 [2024-04-17 16:26:54.807813] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f30f40) on tqpair(0x1ee8300): expected_datao=0, payload_size=4096 00:15:20.979 [2024-04-17 16:26:54.807818] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.979 [2024-04-17 16:26:54.807826] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:20.979 [2024-04-17 16:26:54.807831] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:20.979 [2024-04-17 16:26:54.807840] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.979 [2024-04-17 16:26:54.807847] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.979 [2024-04-17 16:26:54.807851] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.979 [2024-04-17 16:26:54.807855] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f30f40) on tqpair=0x1ee8300 00:15:20.979 [2024-04-17 16:26:54.807873] nvme_ctrlr.c:4036:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:15:20.979 [2024-04-17 16:26:54.807899] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.979 [2024-04-17 16:26:54.807905] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ee8300) 00:15:20.979 [2024-04-17 16:26:54.807913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.979 [2024-04-17 16:26:54.807921] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.979 [2024-04-17 16:26:54.807926] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.979 [2024-04-17 16:26:54.807930] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ee8300) 00:15:20.979 [2024-04-17 16:26:54.807936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:20.979 [2024-04-17 16:26:54.807967] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: 
tcp req 0x1f30f40, cid 4, qid 0 00:15:20.979 [2024-04-17 16:26:54.807975] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f310a0, cid 5, qid 0 00:15:20.979 [2024-04-17 16:26:54.808084] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:20.979 [2024-04-17 16:26:54.808100] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:20.979 [2024-04-17 16:26:54.808105] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:20.979 [2024-04-17 16:26:54.808109] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ee8300): datao=0, datal=1024, cccid=4 00:15:20.979 [2024-04-17 16:26:54.808114] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f30f40) on tqpair(0x1ee8300): expected_datao=0, payload_size=1024 00:15:20.979 [2024-04-17 16:26:54.808119] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.979 [2024-04-17 16:26:54.808127] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:20.979 [2024-04-17 16:26:54.808131] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:20.979 [2024-04-17 16:26:54.808137] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.979 [2024-04-17 16:26:54.808143] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.979 [2024-04-17 16:26:54.808147] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.979 [2024-04-17 16:26:54.808152] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f310a0) on tqpair=0x1ee8300 00:15:20.979 [2024-04-17 16:26:54.848916] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.979 [2024-04-17 16:26:54.848966] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.979 [2024-04-17 16:26:54.848973] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.979 [2024-04-17 16:26:54.848979] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f30f40) on tqpair=0x1ee8300 00:15:20.979 [2024-04-17 16:26:54.849030] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.979 [2024-04-17 16:26:54.849038] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ee8300) 00:15:20.979 [2024-04-17 16:26:54.849054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.979 [2024-04-17 16:26:54.849099] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30f40, cid 4, qid 0 00:15:20.979 [2024-04-17 16:26:54.849239] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:20.979 [2024-04-17 16:26:54.849246] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:20.979 [2024-04-17 16:26:54.849250] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:20.979 [2024-04-17 16:26:54.849255] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ee8300): datao=0, datal=3072, cccid=4 00:15:20.979 [2024-04-17 16:26:54.849260] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f30f40) on tqpair(0x1ee8300): expected_datao=0, payload_size=3072 00:15:20.979 [2024-04-17 16:26:54.849266] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.979 [2024-04-17 16:26:54.849276] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:20.979 [2024-04-17 16:26:54.849281] 
nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:20.979 [2024-04-17 16:26:54.849290] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.979 [2024-04-17 16:26:54.849296] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.979 [2024-04-17 16:26:54.849300] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.979 [2024-04-17 16:26:54.849305] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f30f40) on tqpair=0x1ee8300 00:15:20.979 [2024-04-17 16:26:54.849318] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.979 [2024-04-17 16:26:54.849323] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ee8300) 00:15:20.979 [2024-04-17 16:26:54.849331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.979 [2024-04-17 16:26:54.849359] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30f40, cid 4, qid 0 00:15:20.979 [2024-04-17 16:26:54.849435] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:20.979 [2024-04-17 16:26:54.849442] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:20.979 [2024-04-17 16:26:54.849446] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:20.979 [2024-04-17 16:26:54.849450] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ee8300): datao=0, datal=8, cccid=4 00:15:20.979 [2024-04-17 16:26:54.849455] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f30f40) on tqpair(0x1ee8300): expected_datao=0, payload_size=8 00:15:20.979 [2024-04-17 16:26:54.849460] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.979 [2024-04-17 16:26:54.849467] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:20.979 [2024-04-17 16:26:54.849471] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:20.979 [2024-04-17 16:26:54.893821] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.979 [2024-04-17 16:26:54.893865] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.979 [2024-04-17 16:26:54.893871] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.979 [2024-04-17 16:26:54.893883] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f30f40) on tqpair=0x1ee8300 00:15:20.979 ===================================================== 00:15:20.979 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:15:20.979 ===================================================== 00:15:20.979 Controller Capabilities/Features 00:15:20.979 ================================ 00:15:20.979 Vendor ID: 0000 00:15:20.979 Subsystem Vendor ID: 0000 00:15:20.980 Serial Number: .................... 00:15:20.980 Model Number: ........................................ 
00:15:20.980 Firmware Version: 24.05 00:15:20.980 Recommended Arb Burst: 0 00:15:20.980 IEEE OUI Identifier: 00 00 00 00:15:20.980 Multi-path I/O 00:15:20.980 May have multiple subsystem ports: No 00:15:20.980 May have multiple controllers: No 00:15:20.980 Associated with SR-IOV VF: No 00:15:20.980 Max Data Transfer Size: 131072 00:15:20.980 Max Number of Namespaces: 0 00:15:20.980 Max Number of I/O Queues: 1024 00:15:20.980 NVMe Specification Version (VS): 1.3 00:15:20.980 NVMe Specification Version (Identify): 1.3 00:15:20.980 Maximum Queue Entries: 128 00:15:20.980 Contiguous Queues Required: Yes 00:15:20.980 Arbitration Mechanisms Supported 00:15:20.980 Weighted Round Robin: Not Supported 00:15:20.980 Vendor Specific: Not Supported 00:15:20.980 Reset Timeout: 15000 ms 00:15:20.980 Doorbell Stride: 4 bytes 00:15:20.980 NVM Subsystem Reset: Not Supported 00:15:20.980 Command Sets Supported 00:15:20.980 NVM Command Set: Supported 00:15:20.980 Boot Partition: Not Supported 00:15:20.980 Memory Page Size Minimum: 4096 bytes 00:15:20.980 Memory Page Size Maximum: 4096 bytes 00:15:20.980 Persistent Memory Region: Not Supported 00:15:20.980 Optional Asynchronous Events Supported 00:15:20.980 Namespace Attribute Notices: Not Supported 00:15:20.980 Firmware Activation Notices: Not Supported 00:15:20.980 ANA Change Notices: Not Supported 00:15:20.980 PLE Aggregate Log Change Notices: Not Supported 00:15:20.980 LBA Status Info Alert Notices: Not Supported 00:15:20.980 EGE Aggregate Log Change Notices: Not Supported 00:15:20.980 Normal NVM Subsystem Shutdown event: Not Supported 00:15:20.980 Zone Descriptor Change Notices: Not Supported 00:15:20.980 Discovery Log Change Notices: Supported 00:15:20.980 Controller Attributes 00:15:20.980 128-bit Host Identifier: Not Supported 00:15:20.980 Non-Operational Permissive Mode: Not Supported 00:15:20.980 NVM Sets: Not Supported 00:15:20.980 Read Recovery Levels: Not Supported 00:15:20.980 Endurance Groups: Not Supported 00:15:20.980 Predictable Latency Mode: Not Supported 00:15:20.980 Traffic Based Keep ALive: Not Supported 00:15:20.980 Namespace Granularity: Not Supported 00:15:20.980 SQ Associations: Not Supported 00:15:20.980 UUID List: Not Supported 00:15:20.980 Multi-Domain Subsystem: Not Supported 00:15:20.980 Fixed Capacity Management: Not Supported 00:15:20.980 Variable Capacity Management: Not Supported 00:15:20.980 Delete Endurance Group: Not Supported 00:15:20.980 Delete NVM Set: Not Supported 00:15:20.980 Extended LBA Formats Supported: Not Supported 00:15:20.980 Flexible Data Placement Supported: Not Supported 00:15:20.980 00:15:20.980 Controller Memory Buffer Support 00:15:20.980 ================================ 00:15:20.980 Supported: No 00:15:20.980 00:15:20.980 Persistent Memory Region Support 00:15:20.980 ================================ 00:15:20.980 Supported: No 00:15:20.980 00:15:20.980 Admin Command Set Attributes 00:15:20.980 ============================ 00:15:20.980 Security Send/Receive: Not Supported 00:15:20.980 Format NVM: Not Supported 00:15:20.980 Firmware Activate/Download: Not Supported 00:15:20.980 Namespace Management: Not Supported 00:15:20.980 Device Self-Test: Not Supported 00:15:20.980 Directives: Not Supported 00:15:20.980 NVMe-MI: Not Supported 00:15:20.980 Virtualization Management: Not Supported 00:15:20.980 Doorbell Buffer Config: Not Supported 00:15:20.980 Get LBA Status Capability: Not Supported 00:15:20.980 Command & Feature Lockdown Capability: Not Supported 00:15:20.980 Abort Command Limit: 1 00:15:20.980 Async 
Event Request Limit: 4 00:15:20.980 Number of Firmware Slots: N/A 00:15:20.980 Firmware Slot 1 Read-Only: N/A 00:15:20.980 Firmware Activation Without Reset: N/A 00:15:20.980 Multiple Update Detection Support: N/A 00:15:20.980 Firmware Update Granularity: No Information Provided 00:15:20.980 Per-Namespace SMART Log: No 00:15:20.980 Asymmetric Namespace Access Log Page: Not Supported 00:15:20.980 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:15:20.980 Command Effects Log Page: Not Supported 00:15:20.980 Get Log Page Extended Data: Supported 00:15:20.980 Telemetry Log Pages: Not Supported 00:15:20.980 Persistent Event Log Pages: Not Supported 00:15:20.980 Supported Log Pages Log Page: May Support 00:15:20.980 Commands Supported & Effects Log Page: Not Supported 00:15:20.980 Feature Identifiers & Effects Log Page:May Support 00:15:20.980 NVMe-MI Commands & Effects Log Page: May Support 00:15:20.980 Data Area 4 for Telemetry Log: Not Supported 00:15:20.980 Error Log Page Entries Supported: 128 00:15:20.980 Keep Alive: Not Supported 00:15:20.980 00:15:20.980 NVM Command Set Attributes 00:15:20.980 ========================== 00:15:20.980 Submission Queue Entry Size 00:15:20.980 Max: 1 00:15:20.980 Min: 1 00:15:20.980 Completion Queue Entry Size 00:15:20.980 Max: 1 00:15:20.980 Min: 1 00:15:20.980 Number of Namespaces: 0 00:15:20.980 Compare Command: Not Supported 00:15:20.980 Write Uncorrectable Command: Not Supported 00:15:20.980 Dataset Management Command: Not Supported 00:15:20.980 Write Zeroes Command: Not Supported 00:15:20.980 Set Features Save Field: Not Supported 00:15:20.980 Reservations: Not Supported 00:15:20.980 Timestamp: Not Supported 00:15:20.980 Copy: Not Supported 00:15:20.980 Volatile Write Cache: Not Present 00:15:20.980 Atomic Write Unit (Normal): 1 00:15:20.980 Atomic Write Unit (PFail): 1 00:15:20.980 Atomic Compare & Write Unit: 1 00:15:20.980 Fused Compare & Write: Supported 00:15:20.980 Scatter-Gather List 00:15:20.980 SGL Command Set: Supported 00:15:20.980 SGL Keyed: Supported 00:15:20.980 SGL Bit Bucket Descriptor: Not Supported 00:15:20.980 SGL Metadata Pointer: Not Supported 00:15:20.980 Oversized SGL: Not Supported 00:15:20.980 SGL Metadata Address: Not Supported 00:15:20.980 SGL Offset: Supported 00:15:20.980 Transport SGL Data Block: Not Supported 00:15:20.980 Replay Protected Memory Block: Not Supported 00:15:20.980 00:15:20.980 Firmware Slot Information 00:15:20.980 ========================= 00:15:20.980 Active slot: 0 00:15:20.980 00:15:20.980 00:15:20.980 Error Log 00:15:20.980 ========= 00:15:20.980 00:15:20.980 Active Namespaces 00:15:20.980 ================= 00:15:20.980 Discovery Log Page 00:15:20.980 ================== 00:15:20.980 Generation Counter: 2 00:15:20.980 Number of Records: 2 00:15:20.980 Record Format: 0 00:15:20.980 00:15:20.980 Discovery Log Entry 0 00:15:20.980 ---------------------- 00:15:20.980 Transport Type: 3 (TCP) 00:15:20.980 Address Family: 1 (IPv4) 00:15:20.980 Subsystem Type: 3 (Current Discovery Subsystem) 00:15:20.980 Entry Flags: 00:15:20.980 Duplicate Returned Information: 1 00:15:20.980 Explicit Persistent Connection Support for Discovery: 1 00:15:20.980 Transport Requirements: 00:15:20.980 Secure Channel: Not Required 00:15:20.980 Port ID: 0 (0x0000) 00:15:20.980 Controller ID: 65535 (0xffff) 00:15:20.980 Admin Max SQ Size: 128 00:15:20.980 Transport Service Identifier: 4420 00:15:20.980 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:15:20.980 Transport Address: 10.0.0.2 00:15:20.980 
Discovery Log Entry 1 00:15:20.980 ---------------------- 00:15:20.980 Transport Type: 3 (TCP) 00:15:20.980 Address Family: 1 (IPv4) 00:15:20.980 Subsystem Type: 2 (NVM Subsystem) 00:15:20.980 Entry Flags: 00:15:20.980 Duplicate Returned Information: 0 00:15:20.980 Explicit Persistent Connection Support for Discovery: 0 00:15:20.980 Transport Requirements: 00:15:20.980 Secure Channel: Not Required 00:15:20.980 Port ID: 0 (0x0000) 00:15:20.980 Controller ID: 65535 (0xffff) 00:15:20.980 Admin Max SQ Size: 128 00:15:20.980 Transport Service Identifier: 4420 00:15:20.980 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:15:20.980 Transport Address: 10.0.0.2 [2024-04-17 16:26:54.894109] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:15:20.980 [2024-04-17 16:26:54.894130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.980 [2024-04-17 16:26:54.894139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.980 [2024-04-17 16:26:54.894145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.981 [2024-04-17 16:26:54.894152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.981 [2024-04-17 16:26:54.894166] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.981 [2024-04-17 16:26:54.894172] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.981 [2024-04-17 16:26:54.894176] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ee8300) 00:15:20.981 [2024-04-17 16:26:54.894189] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.981 [2024-04-17 16:26:54.894219] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30de0, cid 3, qid 0 00:15:20.981 [2024-04-17 16:26:54.894297] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.981 [2024-04-17 16:26:54.894305] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.981 [2024-04-17 16:26:54.894309] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.981 [2024-04-17 16:26:54.894313] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f30de0) on tqpair=0x1ee8300 00:15:20.981 [2024-04-17 16:26:54.894329] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.981 [2024-04-17 16:26:54.894334] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.981 [2024-04-17 16:26:54.894339] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ee8300) 00:15:20.981 [2024-04-17 16:26:54.894347] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.981 [2024-04-17 16:26:54.894373] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30de0, cid 3, qid 0 00:15:20.981 [2024-04-17 16:26:54.894458] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.981 [2024-04-17 16:26:54.894465] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.981 [2024-04-17 16:26:54.894469] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.981 [2024-04-17 16:26:54.894473] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f30de0) on tqpair=0x1ee8300 00:15:20.981 [2024-04-17 16:26:54.894480] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:15:20.981 [2024-04-17 16:26:54.894485] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:15:20.981 [2024-04-17 16:26:54.894496] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.981 [2024-04-17 16:26:54.894501] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.981 [2024-04-17 16:26:54.894505] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ee8300) 00:15:20.981 [2024-04-17 16:26:54.894513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.981 [2024-04-17 16:26:54.894539] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30de0, cid 3, qid 0 00:15:20.981 [2024-04-17 16:26:54.894603] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.981 [2024-04-17 16:26:54.894610] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.981 [2024-04-17 16:26:54.894614] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.981 [2024-04-17 16:26:54.894618] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f30de0) on tqpair=0x1ee8300 00:15:20.981 [2024-04-17 16:26:54.894631] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.981 [2024-04-17 16:26:54.894636] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.981 [2024-04-17 16:26:54.894640] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ee8300) 00:15:20.981 [2024-04-17 16:26:54.894648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.981 [2024-04-17 16:26:54.894666] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30de0, cid 3, qid 0 00:15:20.981 [2024-04-17 16:26:54.894726] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.981 [2024-04-17 16:26:54.894733] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.981 [2024-04-17 16:26:54.894737] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.981 [2024-04-17 16:26:54.894741] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f30de0) on tqpair=0x1ee8300 00:15:20.981 [2024-04-17 16:26:54.894752] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.981 [2024-04-17 16:26:54.894757] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.981 [2024-04-17 16:26:54.894761] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ee8300) 00:15:20.981 [2024-04-17 16:26:54.894769] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.981 [2024-04-17 16:26:54.894806] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30de0, cid 3, qid 0 00:15:20.981 [2024-04-17 16:26:54.894862] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.981 [2024-04-17 
16:26:54.894869] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.981 [2024-04-17 16:26:54.894873] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.981 [2024-04-17 16:26:54.894878] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f30de0) on tqpair=0x1ee8300 00:15:20.981 [2024-04-17 16:26:54.894890] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.981 [2024-04-17 16:26:54.894895] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.981 [2024-04-17 16:26:54.894899] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ee8300) 00:15:20.981 [2024-04-17 16:26:54.894907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.981 [2024-04-17 16:26:54.894926] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30de0, cid 3, qid 0 00:15:20.981 [2024-04-17 16:26:54.894980] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.981 [2024-04-17 16:26:54.894987] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.981 [2024-04-17 16:26:54.894991] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.981 [2024-04-17 16:26:54.894995] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f30de0) on tqpair=0x1ee8300 00:15:20.981 [2024-04-17 16:26:54.895007] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.981 [2024-04-17 16:26:54.895012] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.981 [2024-04-17 16:26:54.895016] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ee8300) 00:15:20.981 [2024-04-17 16:26:54.895024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.981 [2024-04-17 16:26:54.895042] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30de0, cid 3, qid 0 00:15:20.981 [2024-04-17 16:26:54.895098] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.981 [2024-04-17 16:26:54.895105] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.981 [2024-04-17 16:26:54.895109] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.981 [2024-04-17 16:26:54.895113] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f30de0) on tqpair=0x1ee8300 00:15:20.981 [2024-04-17 16:26:54.895125] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.981 [2024-04-17 16:26:54.895130] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.981 [2024-04-17 16:26:54.895134] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ee8300) 00:15:20.981 [2024-04-17 16:26:54.895142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.981 [2024-04-17 16:26:54.895160] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30de0, cid 3, qid 0 00:15:20.981 [2024-04-17 16:26:54.895216] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.981 [2024-04-17 16:26:54.895223] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.981 [2024-04-17 16:26:54.895226] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
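[editor's note] From "Prepare to destruct SSD" onward the tool is tearing the admin queue down: outstanding requests are completed as ABORTED - SQ DELETION, RTD3E = 0 selects the default 10000 ms shutdown timeout, and the surrounding run of FABRIC PROPERTY GET entries is the host repeatedly reading controller status until shutdown completes. On the harness side, cleanup hangs off the trap installed at identify.sh line 21; a minimal sketch of that pattern, assuming nvmftestfini and process_shm are provided by nvmf/common.sh and autotest_common.sh as in this run:

cleanup() {
    process_shm --id "$NVMF_APP_SHM_ID"   # capture the trace shm before it disappears
    nvmftestfini                          # kill nvmf_tgt, undo netns/bridge/iptables setup
}
trap 'cleanup; exit 1' SIGINT SIGTERM EXIT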
00:15:20.981 [2024-04-17 16:26:54.895231] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f30de0) on tqpair=0x1ee8300 00:15:20.981 [2024-04-17 16:26:54.895242] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.981 [2024-04-17 16:26:54.895247] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.981 [2024-04-17 16:26:54.895251] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ee8300) 00:15:20.981 [2024-04-17 16:26:54.895259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.981 [2024-04-17 16:26:54.895277] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30de0, cid 3, qid 0 00:15:20.981 [2024-04-17 16:26:54.895332] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.981 [2024-04-17 16:26:54.895339] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.981 [2024-04-17 16:26:54.895343] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.981 [2024-04-17 16:26:54.895348] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f30de0) on tqpair=0x1ee8300 00:15:20.981 [2024-04-17 16:26:54.895359] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.981 [2024-04-17 16:26:54.895364] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.981 [2024-04-17 16:26:54.895368] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ee8300) 00:15:20.981 [2024-04-17 16:26:54.895376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.981 [2024-04-17 16:26:54.895394] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30de0, cid 3, qid 0 00:15:20.981 [2024-04-17 16:26:54.895451] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.981 [2024-04-17 16:26:54.895458] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.981 [2024-04-17 16:26:54.895461] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.981 [2024-04-17 16:26:54.895466] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f30de0) on tqpair=0x1ee8300 00:15:20.981 [2024-04-17 16:26:54.895477] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.981 [2024-04-17 16:26:54.895482] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.981 [2024-04-17 16:26:54.895487] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ee8300) 00:15:20.981 [2024-04-17 16:26:54.895494] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.981 [2024-04-17 16:26:54.895512] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30de0, cid 3, qid 0 00:15:20.981 [2024-04-17 16:26:54.895567] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.981 [2024-04-17 16:26:54.895574] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.981 [2024-04-17 16:26:54.895578] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.981 [2024-04-17 16:26:54.895582] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f30de0) on tqpair=0x1ee8300 00:15:20.981 [2024-04-17 16:26:54.895594] 
nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.981 [2024-04-17 16:26:54.895598] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.982 [2024-04-17 16:26:54.895603] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ee8300) 00:15:20.982 [2024-04-17 16:26:54.895610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.982 [2024-04-17 16:26:54.895628] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30de0, cid 3, qid 0 00:15:20.982 [2024-04-17 16:26:54.895685] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.982 [2024-04-17 16:26:54.895703] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.982 [2024-04-17 16:26:54.895707] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.982 [2024-04-17 16:26:54.895712] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f30de0) on tqpair=0x1ee8300 00:15:20.982 [2024-04-17 16:26:54.895724] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.982 [2024-04-17 16:26:54.895728] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.982 [2024-04-17 16:26:54.895733] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ee8300) 00:15:20.982 [2024-04-17 16:26:54.895740] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.982 [2024-04-17 16:26:54.895759] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30de0, cid 3, qid 0 00:15:20.982 [2024-04-17 16:26:54.895828] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.982 [2024-04-17 16:26:54.895837] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.982 [2024-04-17 16:26:54.895841] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.982 [2024-04-17 16:26:54.895846] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f30de0) on tqpair=0x1ee8300 00:15:20.982 [2024-04-17 16:26:54.895858] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.982 [2024-04-17 16:26:54.895863] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.982 [2024-04-17 16:26:54.895867] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ee8300) 00:15:20.982 [2024-04-17 16:26:54.895875] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.982 [2024-04-17 16:26:54.895896] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30de0, cid 3, qid 0 00:15:20.982 [2024-04-17 16:26:54.895950] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.982 [2024-04-17 16:26:54.895963] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.982 [2024-04-17 16:26:54.895967] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.982 [2024-04-17 16:26:54.895972] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f30de0) on tqpair=0x1ee8300 00:15:20.982 [2024-04-17 16:26:54.895984] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.982 [2024-04-17 16:26:54.895989] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.982 [2024-04-17 
16:26:54.895993] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ee8300) 00:15:20.982 [2024-04-17 16:26:54.896002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.982 [2024-04-17 16:26:54.896021] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30de0, cid 3, qid 0 00:15:20.982 [2024-04-17 16:26:54.896075] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.982 [2024-04-17 16:26:54.896087] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.982 [2024-04-17 16:26:54.896092] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.982 [2024-04-17 16:26:54.896096] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f30de0) on tqpair=0x1ee8300 00:15:20.982 [2024-04-17 16:26:54.896109] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.982 [2024-04-17 16:26:54.896114] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.982 [2024-04-17 16:26:54.896118] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ee8300) 00:15:20.982 [2024-04-17 16:26:54.896126] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.982 [2024-04-17 16:26:54.896146] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30de0, cid 3, qid 0 00:15:20.982 [2024-04-17 16:26:54.896201] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.982 [2024-04-17 16:26:54.896208] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.982 [2024-04-17 16:26:54.896212] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.982 [2024-04-17 16:26:54.896216] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f30de0) on tqpair=0x1ee8300 00:15:20.982 [2024-04-17 16:26:54.896228] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.982 [2024-04-17 16:26:54.896232] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.982 [2024-04-17 16:26:54.896237] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ee8300) 00:15:20.982 [2024-04-17 16:26:54.896245] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.982 [2024-04-17 16:26:54.896263] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30de0, cid 3, qid 0 00:15:20.982 [2024-04-17 16:26:54.896316] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.982 [2024-04-17 16:26:54.896323] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.982 [2024-04-17 16:26:54.896327] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.982 [2024-04-17 16:26:54.896331] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f30de0) on tqpair=0x1ee8300 00:15:20.982 [2024-04-17 16:26:54.896343] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.982 [2024-04-17 16:26:54.896348] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.982 [2024-04-17 16:26:54.896352] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ee8300) 00:15:20.982 [2024-04-17 16:26:54.896359] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.982 [2024-04-17 16:26:54.896378] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30de0, cid 3, qid 0 00:15:20.982 [2024-04-17 16:26:54.896432] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.982 [2024-04-17 16:26:54.896448] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.982 [2024-04-17 16:26:54.896453] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.982 [2024-04-17 16:26:54.896457] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f30de0) on tqpair=0x1ee8300 00:15:20.982 [2024-04-17 16:26:54.896470] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.982 [2024-04-17 16:26:54.896475] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.982 [2024-04-17 16:26:54.896479] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ee8300) 00:15:20.982 [2024-04-17 16:26:54.896487] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.982 [2024-04-17 16:26:54.896508] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30de0, cid 3, qid 0 00:15:20.982 [2024-04-17 16:26:54.896563] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.982 [2024-04-17 16:26:54.896570] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.982 [2024-04-17 16:26:54.896573] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.982 [2024-04-17 16:26:54.896578] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f30de0) on tqpair=0x1ee8300 00:15:20.982 [2024-04-17 16:26:54.896589] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.982 [2024-04-17 16:26:54.896594] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.982 [2024-04-17 16:26:54.896598] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ee8300) 00:15:20.982 [2024-04-17 16:26:54.896606] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.982 [2024-04-17 16:26:54.896625] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30de0, cid 3, qid 0 00:15:20.982 [2024-04-17 16:26:54.896687] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.982 [2024-04-17 16:26:54.896699] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.982 [2024-04-17 16:26:54.896704] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.982 [2024-04-17 16:26:54.896708] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f30de0) on tqpair=0x1ee8300 00:15:20.982 [2024-04-17 16:26:54.896720] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.982 [2024-04-17 16:26:54.896725] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.982 [2024-04-17 16:26:54.896729] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ee8300) 00:15:20.982 [2024-04-17 16:26:54.896737] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.982 [2024-04-17 16:26:54.896757] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30de0, cid 3, qid 0 00:15:20.982 [2024-04-17 16:26:54.896828] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.982 [2024-04-17 16:26:54.896837] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.983 [2024-04-17 16:26:54.896841] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.983 [2024-04-17 16:26:54.896845] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f30de0) on tqpair=0x1ee8300 00:15:20.983 [2024-04-17 16:26:54.896858] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.983 [2024-04-17 16:26:54.896863] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.983 [2024-04-17 16:26:54.896867] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ee8300) 00:15:20.983 [2024-04-17 16:26:54.896875] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.983 [2024-04-17 16:26:54.896896] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30de0, cid 3, qid 0 00:15:20.983 [2024-04-17 16:26:54.896953] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.983 [2024-04-17 16:26:54.896960] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.983 [2024-04-17 16:26:54.896963] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.983 [2024-04-17 16:26:54.896968] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f30de0) on tqpair=0x1ee8300 00:15:20.983 [2024-04-17 16:26:54.896979] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.983 [2024-04-17 16:26:54.896984] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.983 [2024-04-17 16:26:54.896988] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ee8300) 00:15:20.983 [2024-04-17 16:26:54.896996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.983 [2024-04-17 16:26:54.897015] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30de0, cid 3, qid 0 00:15:20.983 [2024-04-17 16:26:54.897068] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.983 [2024-04-17 16:26:54.897075] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.983 [2024-04-17 16:26:54.897079] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.983 [2024-04-17 16:26:54.897083] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f30de0) on tqpair=0x1ee8300 00:15:20.983 [2024-04-17 16:26:54.897095] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.983 [2024-04-17 16:26:54.897099] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.983 [2024-04-17 16:26:54.897104] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ee8300) 00:15:20.983 [2024-04-17 16:26:54.897111] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.983 [2024-04-17 16:26:54.897129] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30de0, cid 3, qid 0 00:15:20.983 [2024-04-17 16:26:54.897185] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:15:20.983 [2024-04-17 16:26:54.897192] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.983 [2024-04-17 16:26:54.897196] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.983 [2024-04-17 16:26:54.897200] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f30de0) on tqpair=0x1ee8300 00:15:20.983 [2024-04-17 16:26:54.897212] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.983 [2024-04-17 16:26:54.897217] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.983 [2024-04-17 16:26:54.897221] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ee8300) 00:15:20.983 [2024-04-17 16:26:54.897229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.983 [2024-04-17 16:26:54.897247] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30de0, cid 3, qid 0 00:15:20.983 [2024-04-17 16:26:54.897300] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.983 [2024-04-17 16:26:54.897307] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.983 [2024-04-17 16:26:54.897311] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.983 [2024-04-17 16:26:54.897315] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f30de0) on tqpair=0x1ee8300 00:15:20.983 [2024-04-17 16:26:54.897327] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.983 [2024-04-17 16:26:54.897332] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.983 [2024-04-17 16:26:54.897336] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ee8300) 00:15:20.983 [2024-04-17 16:26:54.897343] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.983 [2024-04-17 16:26:54.897361] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30de0, cid 3, qid 0 00:15:20.983 [2024-04-17 16:26:54.897423] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.983 [2024-04-17 16:26:54.897430] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.983 [2024-04-17 16:26:54.897434] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.983 [2024-04-17 16:26:54.897438] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f30de0) on tqpair=0x1ee8300 00:15:20.983 [2024-04-17 16:26:54.897450] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.983 [2024-04-17 16:26:54.897455] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.983 [2024-04-17 16:26:54.897459] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ee8300) 00:15:20.983 [2024-04-17 16:26:54.897466] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.983 [2024-04-17 16:26:54.897485] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30de0, cid 3, qid 0 00:15:20.983 [2024-04-17 16:26:54.897538] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.983 [2024-04-17 16:26:54.897545] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.983 [2024-04-17 16:26:54.897549] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.983 [2024-04-17 16:26:54.897554] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f30de0) on tqpair=0x1ee8300 00:15:20.983 [2024-04-17 16:26:54.897565] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.983 [2024-04-17 16:26:54.897570] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.983 [2024-04-17 16:26:54.897574] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ee8300) 00:15:20.983 [2024-04-17 16:26:54.897582] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.983 [2024-04-17 16:26:54.897600] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30de0, cid 3, qid 0 00:15:20.983 [2024-04-17 16:26:54.897655] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.983 [2024-04-17 16:26:54.897662] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.983 [2024-04-17 16:26:54.897666] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.983 [2024-04-17 16:26:54.897670] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f30de0) on tqpair=0x1ee8300 00:15:20.983 [2024-04-17 16:26:54.897682] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.983 [2024-04-17 16:26:54.897687] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.983 [2024-04-17 16:26:54.897691] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ee8300) 00:15:20.983 [2024-04-17 16:26:54.897699] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.983 [2024-04-17 16:26:54.897717] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30de0, cid 3, qid 0 00:15:20.983 [2024-04-17 16:26:54.901792] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.983 [2024-04-17 16:26:54.901814] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.983 [2024-04-17 16:26:54.901819] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.983 [2024-04-17 16:26:54.901824] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f30de0) on tqpair=0x1ee8300 00:15:20.983 [2024-04-17 16:26:54.901841] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:20.983 [2024-04-17 16:26:54.901854] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:20.983 [2024-04-17 16:26:54.901858] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ee8300) 00:15:20.983 [2024-04-17 16:26:54.901868] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:20.983 [2024-04-17 16:26:54.901896] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f30de0, cid 3, qid 0 00:15:20.983 [2024-04-17 16:26:54.901966] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:20.983 [2024-04-17 16:26:54.901973] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:20.983 [2024-04-17 16:26:54.901977] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:20.983 [2024-04-17 16:26:54.901981] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f30de0) on 
00:15:20.983 [2024-04-17 16:26:54.901991] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds
00:15:20.983
00:15:20.983 16:26:54 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:15:20.983 [2024-04-17 16:26:54.941868] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization...
00:15:20.983 [2024-04-17 16:26:54.941924] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80370 ]
00:15:21.246 [2024-04-17 16:26:55.083526] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout)
00:15:21.246 [2024-04-17 16:26:55.083613] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:15:21.246 [2024-04-17 16:26:55.083622] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:15:21.246 [2024-04-17 16:26:55.083638] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:15:21.246 [2024-04-17 16:26:55.083651] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:15:21.246 [2024-04-17 16:26:55.087854] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout)
00:15:21.246 [2024-04-17 16:26:55.087938] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x119c300 0
00:15:21.246 [2024-04-17 16:26:55.095800] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:15:21.246 [2024-04-17 16:26:55.095825] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:15:21.246 [2024-04-17 16:26:55.095831] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:15:21.246 [2024-04-17 16:26:55.095835] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:15:21.246 [2024-04-17 16:26:55.095894] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter
00:15:21.246 [2024-04-17 16:26:55.095924] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:15:21.246 [2024-04-17 16:26:55.095930] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x119c300)
00:15:21.246 [2024-04-17 16:26:55.095948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:15:21.246 [2024-04-17 16:26:55.095989] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e49c0, cid 0, qid 0
00:15:21.246 [2024-04-17 16:26:55.102842] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:15:21.246 [2024-04-17 16:26:55.102869] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:15:21.246 [2024-04-17 16:26:55.102875] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:15:21.246 [2024-04-17 16:26:55.102881] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11e49c0) on tqpair=0x119c300
00:15:21.246 [2024-04-17 16:26:55.102898] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
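(Context, not part of the captured log: the spdk_nvme_identify run above is SPDK's stock identify example driving the fabrics connect and identify admin commands that the DEBUG lines trace. A minimal, hypothetical C sketch of the same flow against this target, using SPDK's public API with hugepage setup, build flags, and most error handling omitted, could look like the following.)

#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	/* Initialize the SPDK environment (DPDK EAL underneath). */
	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch"; /* hypothetical app name */
	if (spdk_env_init(&env_opts) < 0) {
		fprintf(stderr, "spdk_env_init failed\n");
		return 1;
	}

	/* Same transport string the test passes via -r. */
	if (spdk_nvme_transport_id_parse(&trid,
	        "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	        "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/*
	 * Connect runs the admin-queue state machine seen in the trace:
	 * icreq/icresp, FABRIC CONNECT, vs/cap property reads, CC.EN = 1,
	 * then IDENTIFY CONTROLLER.
	 */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "connect to %s failed\n", trid.traddr);
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Vendor ID: %04x\n", cdata->vid);
	printf("Serial Number: %.20s\n", cdata->sn);
	printf("Model Number: %.40s\n", cdata->mn);

	spdk_nvme_detach(ctrlr);
	return 0;
}

(spdk_nvme_connect() performs the whole admin-queue bring-up logged above, so such a sketch would print the same Vendor ID / Serial Number / Model Number fields that appear in the identify report further below.)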
00:15:21.246 [2024-04-17 16:26:55.102910] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout)
00:15:21.246 [2024-04-17 16:26:55.102916] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout)
00:15:21.246 [2024-04-17 16:26:55.102941] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter
00:15:21.246 [2024-04-17 16:26:55.102948] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:15:21.246 [2024-04-17 16:26:55.102952] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x119c300)
00:15:21.246 [2024-04-17 16:26:55.102966] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:15:21.246 [2024-04-17 16:26:55.102998] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e49c0, cid 0, qid 0
00:15:21.246 [2024-04-17 16:26:55.103077] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:15:21.246 [2024-04-17 16:26:55.103085] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:15:21.246 [2024-04-17 16:26:55.103089] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:15:21.247 [2024-04-17 16:26:55.103093] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11e49c0) on tqpair=0x119c300
00:15:21.247 [2024-04-17 16:26:55.103104] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout)
00:15:21.247 [2024-04-17 16:26:55.103113] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout)
00:15:21.247 [2024-04-17 16:26:55.103122] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter
00:15:21.247 [2024-04-17 16:26:55.103126] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:15:21.247 [2024-04-17 16:26:55.103130] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x119c300)
00:15:21.247 [2024-04-17 16:26:55.103138] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:15:21.247 [2024-04-17 16:26:55.103160] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e49c0, cid 0, qid 0
00:15:21.247 [2024-04-17 16:26:55.103223] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:15:21.247 [2024-04-17 16:26:55.103230] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:15:21.247 [2024-04-17 16:26:55.103234] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:15:21.247 [2024-04-17 16:26:55.103238] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11e49c0) on tqpair=0x119c300
00:15:21.247 [2024-04-17 16:26:55.103245] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout)
00:15:21.247 [2024-04-17 16:26:55.103256] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms)
00:15:21.247 [2024-04-17 16:26:55.103264] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter
00:15:21.247 [2024-04-17 16:26:55.103269] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:15:21.247 [2024-04-17 16:26:55.103273] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x119c300)
00:15:21.247 [2024-04-17 16:26:55.103280]
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:21.247 [2024-04-17 16:26:55.103300] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e49c0, cid 0, qid 0 00:15:21.247 [2024-04-17 16:26:55.103367] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:21.247 [2024-04-17 16:26:55.103374] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:21.247 [2024-04-17 16:26:55.103378] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:21.247 [2024-04-17 16:26:55.103382] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11e49c0) on tqpair=0x119c300 00:15:21.247 [2024-04-17 16:26:55.103390] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:21.247 [2024-04-17 16:26:55.103401] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:21.247 [2024-04-17 16:26:55.103406] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:21.247 [2024-04-17 16:26:55.103410] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x119c300) 00:15:21.247 [2024-04-17 16:26:55.103418] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:21.247 [2024-04-17 16:26:55.103436] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e49c0, cid 0, qid 0 00:15:21.247 [2024-04-17 16:26:55.103496] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:21.247 [2024-04-17 16:26:55.103503] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:21.247 [2024-04-17 16:26:55.103507] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:21.247 [2024-04-17 16:26:55.103511] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11e49c0) on tqpair=0x119c300 00:15:21.247 [2024-04-17 16:26:55.103517] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:15:21.247 [2024-04-17 16:26:55.103523] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:15:21.247 [2024-04-17 16:26:55.103531] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:21.247 [2024-04-17 16:26:55.103638] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:15:21.247 [2024-04-17 16:26:55.103643] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:21.247 [2024-04-17 16:26:55.103652] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:21.247 [2024-04-17 16:26:55.103657] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:21.247 [2024-04-17 16:26:55.103661] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x119c300) 00:15:21.247 [2024-04-17 16:26:55.103669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:21.247 [2024-04-17 16:26:55.103688] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x11e49c0, cid 0, qid 0 00:15:21.247 [2024-04-17 16:26:55.103754] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:21.247 [2024-04-17 16:26:55.103762] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:21.247 [2024-04-17 16:26:55.103766] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:21.247 [2024-04-17 16:26:55.103783] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11e49c0) on tqpair=0x119c300 00:15:21.247 [2024-04-17 16:26:55.103792] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:21.247 [2024-04-17 16:26:55.103805] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:21.247 [2024-04-17 16:26:55.103810] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:21.247 [2024-04-17 16:26:55.103814] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x119c300) 00:15:21.247 [2024-04-17 16:26:55.103822] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:21.247 [2024-04-17 16:26:55.103844] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e49c0, cid 0, qid 0 00:15:21.247 [2024-04-17 16:26:55.103904] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:21.247 [2024-04-17 16:26:55.103911] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:21.247 [2024-04-17 16:26:55.103915] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:21.247 [2024-04-17 16:26:55.103920] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11e49c0) on tqpair=0x119c300 00:15:21.247 [2024-04-17 16:26:55.103926] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:21.247 [2024-04-17 16:26:55.103931] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:15:21.247 [2024-04-17 16:26:55.103940] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:15:21.247 [2024-04-17 16:26:55.103951] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:15:21.247 [2024-04-17 16:26:55.103964] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:21.247 [2024-04-17 16:26:55.103969] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x119c300) 00:15:21.247 [2024-04-17 16:26:55.103977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:21.247 [2024-04-17 16:26:55.103998] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e49c0, cid 0, qid 0 00:15:21.247 [2024-04-17 16:26:55.104105] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:21.247 [2024-04-17 16:26:55.104113] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:21.247 [2024-04-17 16:26:55.104117] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:21.247 [2024-04-17 16:26:55.104121] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x119c300): 
datao=0, datal=4096, cccid=0 00:15:21.247 [2024-04-17 16:26:55.104126] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11e49c0) on tqpair(0x119c300): expected_datao=0, payload_size=4096 00:15:21.247 [2024-04-17 16:26:55.104132] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:21.247 [2024-04-17 16:26:55.104141] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:21.247 [2024-04-17 16:26:55.104146] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:21.247 [2024-04-17 16:26:55.104155] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:21.247 [2024-04-17 16:26:55.104162] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:21.247 [2024-04-17 16:26:55.104165] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:21.247 [2024-04-17 16:26:55.104169] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11e49c0) on tqpair=0x119c300 00:15:21.247 [2024-04-17 16:26:55.104181] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:15:21.247 [2024-04-17 16:26:55.104187] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:15:21.247 [2024-04-17 16:26:55.104192] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:15:21.247 [2024-04-17 16:26:55.104201] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:15:21.247 [2024-04-17 16:26:55.104207] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:15:21.247 [2024-04-17 16:26:55.104212] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:15:21.247 [2024-04-17 16:26:55.104223] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:15:21.247 [2024-04-17 16:26:55.104231] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:21.247 [2024-04-17 16:26:55.104236] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:21.247 [2024-04-17 16:26:55.104240] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x119c300) 00:15:21.247 [2024-04-17 16:26:55.104248] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:21.247 [2024-04-17 16:26:55.104270] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e49c0, cid 0, qid 0 00:15:21.247 [2024-04-17 16:26:55.104333] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:21.247 [2024-04-17 16:26:55.104340] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:21.247 [2024-04-17 16:26:55.104344] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:21.247 [2024-04-17 16:26:55.104348] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11e49c0) on tqpair=0x119c300 00:15:21.247 [2024-04-17 16:26:55.104358] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:21.247 [2024-04-17 16:26:55.104362] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:21.247 [2024-04-17 16:26:55.104366] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x119c300) 00:15:21.247 [2024-04-17 16:26:55.104373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.247 [2024-04-17 16:26:55.104380] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:21.247 [2024-04-17 16:26:55.104385] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:21.247 [2024-04-17 16:26:55.104389] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x119c300) 00:15:21.248 [2024-04-17 16:26:55.104395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.248 [2024-04-17 16:26:55.104402] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:21.248 [2024-04-17 16:26:55.104406] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:21.248 [2024-04-17 16:26:55.104410] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x119c300) 00:15:21.248 [2024-04-17 16:26:55.104416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.248 [2024-04-17 16:26:55.104423] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:21.248 [2024-04-17 16:26:55.104427] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:21.248 [2024-04-17 16:26:55.104431] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x119c300) 00:15:21.248 [2024-04-17 16:26:55.104437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.248 [2024-04-17 16:26:55.104442] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:21.248 [2024-04-17 16:26:55.104456] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:21.248 [2024-04-17 16:26:55.104464] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:21.248 [2024-04-17 16:26:55.104469] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x119c300) 00:15:21.248 [2024-04-17 16:26:55.104476] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:21.248 [2024-04-17 16:26:55.104498] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e49c0, cid 0, qid 0 00:15:21.248 [2024-04-17 16:26:55.104506] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e4b20, cid 1, qid 0 00:15:21.248 [2024-04-17 16:26:55.104511] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e4c80, cid 2, qid 0 00:15:21.248 [2024-04-17 16:26:55.104516] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e4de0, cid 3, qid 0 00:15:21.248 [2024-04-17 16:26:55.104520] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e4f40, cid 4, qid 0 00:15:21.248 [2024-04-17 16:26:55.104646] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:21.248 [2024-04-17 16:26:55.104654] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:21.248 [2024-04-17 16:26:55.104658] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:15:21.248 [2024-04-17 16:26:55.104662] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11e4f40) on tqpair=0x119c300 00:15:21.248 [2024-04-17 16:26:55.104669] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:15:21.248 [2024-04-17 16:26:55.104674] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:21.248 [2024-04-17 16:26:55.104684] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:15:21.248 [2024-04-17 16:26:55.104691] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:21.248 [2024-04-17 16:26:55.104698] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:21.248 [2024-04-17 16:26:55.104703] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:21.248 [2024-04-17 16:26:55.104706] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x119c300) 00:15:21.248 [2024-04-17 16:26:55.104714] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:21.248 [2024-04-17 16:26:55.104734] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e4f40, cid 4, qid 0 00:15:21.248 [2024-04-17 16:26:55.104818] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:21.248 [2024-04-17 16:26:55.104827] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:21.248 [2024-04-17 16:26:55.104831] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:21.248 [2024-04-17 16:26:55.104835] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11e4f40) on tqpair=0x119c300 00:15:21.248 [2024-04-17 16:26:55.104889] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:15:21.248 [2024-04-17 16:26:55.104901] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:21.248 [2024-04-17 16:26:55.104910] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:21.248 [2024-04-17 16:26:55.104915] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x119c300) 00:15:21.248 [2024-04-17 16:26:55.104923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:21.248 [2024-04-17 16:26:55.104945] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e4f40, cid 4, qid 0 00:15:21.248 [2024-04-17 16:26:55.105021] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:21.248 [2024-04-17 16:26:55.105028] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:21.248 [2024-04-17 16:26:55.105032] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:21.248 [2024-04-17 16:26:55.105036] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x119c300): datao=0, datal=4096, cccid=4 00:15:21.248 [2024-04-17 16:26:55.105041] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x11e4f40) on tqpair(0x119c300): expected_datao=0, payload_size=4096 00:15:21.248 [2024-04-17 16:26:55.105046] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:21.248 [2024-04-17 16:26:55.105054] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:21.248 [2024-04-17 16:26:55.105058] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:21.248 [2024-04-17 16:26:55.105067] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:21.248 [2024-04-17 16:26:55.105073] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:21.248 [2024-04-17 16:26:55.105077] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:21.248 [2024-04-17 16:26:55.105081] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11e4f40) on tqpair=0x119c300 00:15:21.248 [2024-04-17 16:26:55.105094] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:15:21.248 [2024-04-17 16:26:55.105107] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:15:21.248 [2024-04-17 16:26:55.105119] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:15:21.248 [2024-04-17 16:26:55.105127] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:21.248 [2024-04-17 16:26:55.105132] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x119c300) 00:15:21.248 [2024-04-17 16:26:55.105140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:21.248 [2024-04-17 16:26:55.105161] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e4f40, cid 4, qid 0 00:15:21.248 [2024-04-17 16:26:55.105255] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:21.248 [2024-04-17 16:26:55.105262] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:21.248 [2024-04-17 16:26:55.105266] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:21.248 [2024-04-17 16:26:55.105270] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x119c300): datao=0, datal=4096, cccid=4 00:15:21.248 [2024-04-17 16:26:55.105274] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11e4f40) on tqpair(0x119c300): expected_datao=0, payload_size=4096 00:15:21.248 [2024-04-17 16:26:55.105279] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:21.248 [2024-04-17 16:26:55.105287] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:21.248 [2024-04-17 16:26:55.105291] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:21.248 [2024-04-17 16:26:55.105300] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:21.248 [2024-04-17 16:26:55.105307] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:21.248 [2024-04-17 16:26:55.105310] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:21.248 [2024-04-17 16:26:55.105315] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11e4f40) on tqpair=0x119c300 00:15:21.248 [2024-04-17 16:26:55.105332] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 
00:15:21.248 [2024-04-17 16:26:55.105345] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:21.248 [2024-04-17 16:26:55.105354] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:21.248 [2024-04-17 16:26:55.105358] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x119c300) 00:15:21.248 [2024-04-17 16:26:55.105366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:21.248 [2024-04-17 16:26:55.105387] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e4f40, cid 4, qid 0 00:15:21.248 [2024-04-17 16:26:55.105469] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:21.248 [2024-04-17 16:26:55.105477] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:21.248 [2024-04-17 16:26:55.105481] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:21.248 [2024-04-17 16:26:55.105485] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x119c300): datao=0, datal=4096, cccid=4 00:15:21.248 [2024-04-17 16:26:55.105490] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11e4f40) on tqpair(0x119c300): expected_datao=0, payload_size=4096 00:15:21.248 [2024-04-17 16:26:55.105495] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:21.248 [2024-04-17 16:26:55.105502] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:21.248 [2024-04-17 16:26:55.105507] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:21.248 [2024-04-17 16:26:55.105516] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:21.248 [2024-04-17 16:26:55.105522] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:21.248 [2024-04-17 16:26:55.105526] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:21.248 [2024-04-17 16:26:55.105530] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11e4f40) on tqpair=0x119c300 00:15:21.248 [2024-04-17 16:26:55.105540] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:21.248 [2024-04-17 16:26:55.105550] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:15:21.248 [2024-04-17 16:26:55.105564] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:15:21.248 [2024-04-17 16:26:55.105571] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:21.248 [2024-04-17 16:26:55.105577] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:15:21.249 [2024-04-17 16:26:55.105583] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:15:21.249 [2024-04-17 16:26:55.105588] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:15:21.249 [2024-04-17 16:26:55.105593] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:15:21.249 [2024-04-17 16:26:55.105614] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:21.249 [2024-04-17 16:26:55.105619] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x119c300) 00:15:21.249 [2024-04-17 16:26:55.105627] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:21.249 [2024-04-17 16:26:55.105635] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:21.249 [2024-04-17 16:26:55.105639] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:21.249 [2024-04-17 16:26:55.105643] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x119c300) 00:15:21.249 [2024-04-17 16:26:55.105650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.249 [2024-04-17 16:26:55.105677] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e4f40, cid 4, qid 0 00:15:21.249 [2024-04-17 16:26:55.105685] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e50a0, cid 5, qid 0 00:15:21.249 [2024-04-17 16:26:55.105803] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:21.249 [2024-04-17 16:26:55.105813] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:21.249 [2024-04-17 16:26:55.105817] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:21.249 [2024-04-17 16:26:55.105821] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11e4f40) on tqpair=0x119c300 00:15:21.249 [2024-04-17 16:26:55.105830] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:21.249 [2024-04-17 16:26:55.105836] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:21.249 [2024-04-17 16:26:55.105840] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:21.249 [2024-04-17 16:26:55.105844] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11e50a0) on tqpair=0x119c300 00:15:21.249 [2024-04-17 16:26:55.105856] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:21.249 [2024-04-17 16:26:55.105861] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x119c300) 00:15:21.249 [2024-04-17 16:26:55.105869] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:21.249 [2024-04-17 16:26:55.105892] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e50a0, cid 5, qid 0 00:15:21.249 [2024-04-17 16:26:55.105956] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:21.249 [2024-04-17 16:26:55.105963] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:21.249 [2024-04-17 16:26:55.105967] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:21.249 [2024-04-17 16:26:55.105971] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11e50a0) on tqpair=0x119c300 00:15:21.249 [2024-04-17 16:26:55.105983] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:21.249 [2024-04-17 16:26:55.105988] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x119c300) 00:15:21.249 [2024-04-17 16:26:55.105996] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:21.249 [2024-04-17 16:26:55.106015] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e50a0, cid 5, qid 0 00:15:21.249 [2024-04-17 16:26:55.106073] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:21.249 [2024-04-17 16:26:55.106080] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:21.249 [2024-04-17 16:26:55.106084] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:21.249 [2024-04-17 16:26:55.106088] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11e50a0) on tqpair=0x119c300 00:15:21.249 [2024-04-17 16:26:55.106099] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:21.249 [2024-04-17 16:26:55.106104] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x119c300) 00:15:21.249 [2024-04-17 16:26:55.106112] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:21.249 [2024-04-17 16:26:55.106131] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e50a0, cid 5, qid 0 00:15:21.249 [2024-04-17 16:26:55.106188] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:21.249 [2024-04-17 16:26:55.106195] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:21.249 [2024-04-17 16:26:55.106199] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:21.249 [2024-04-17 16:26:55.106203] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11e50a0) on tqpair=0x119c300 00:15:21.249 [2024-04-17 16:26:55.106219] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:21.249 [2024-04-17 16:26:55.106224] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x119c300) 00:15:21.249 [2024-04-17 16:26:55.106232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:21.249 [2024-04-17 16:26:55.106240] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:21.249 [2024-04-17 16:26:55.106245] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x119c300) 00:15:21.249 [2024-04-17 16:26:55.106251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:21.249 [2024-04-17 16:26:55.106259] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:21.249 [2024-04-17 16:26:55.106263] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x119c300) 00:15:21.249 [2024-04-17 16:26:55.106270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:21.249 [2024-04-17 16:26:55.106279] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:21.249 [2024-04-17 16:26:55.106283] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x119c300) 00:15:21.249 [2024-04-17 16:26:55.106290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff 
cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:21.249 [2024-04-17 16:26:55.106317] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e50a0, cid 5, qid 0 00:15:21.249 [2024-04-17 16:26:55.106324] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e4f40, cid 4, qid 0 00:15:21.249 [2024-04-17 16:26:55.106329] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e5200, cid 6, qid 0 00:15:21.249 [2024-04-17 16:26:55.106334] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e5360, cid 7, qid 0 00:15:21.249 [2024-04-17 16:26:55.106476] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:21.249 [2024-04-17 16:26:55.106484] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:21.249 [2024-04-17 16:26:55.106487] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:21.249 [2024-04-17 16:26:55.106491] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x119c300): datao=0, datal=8192, cccid=5 00:15:21.249 [2024-04-17 16:26:55.106496] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11e50a0) on tqpair(0x119c300): expected_datao=0, payload_size=8192 00:15:21.249 [2024-04-17 16:26:55.106501] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:21.249 [2024-04-17 16:26:55.106518] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:21.249 [2024-04-17 16:26:55.106523] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:21.249 [2024-04-17 16:26:55.106529] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:21.249 [2024-04-17 16:26:55.106535] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:21.249 [2024-04-17 16:26:55.106539] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:21.249 [2024-04-17 16:26:55.106543] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x119c300): datao=0, datal=512, cccid=4 00:15:21.249 [2024-04-17 16:26:55.106548] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11e4f40) on tqpair(0x119c300): expected_datao=0, payload_size=512 00:15:21.249 [2024-04-17 16:26:55.106552] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:21.249 [2024-04-17 16:26:55.106559] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:21.249 [2024-04-17 16:26:55.106563] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:21.249 [2024-04-17 16:26:55.106569] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:21.249 [2024-04-17 16:26:55.106575] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:21.249 [2024-04-17 16:26:55.106578] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:21.249 [2024-04-17 16:26:55.106582] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x119c300): datao=0, datal=512, cccid=6 00:15:21.249 [2024-04-17 16:26:55.106587] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11e5200) on tqpair(0x119c300): expected_datao=0, payload_size=512 00:15:21.249 [2024-04-17 16:26:55.106591] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:21.249 [2024-04-17 16:26:55.106598] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:21.249 [2024-04-17 16:26:55.106602] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:21.249 [2024-04-17 16:26:55.106607] 
nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:21.249 [2024-04-17 16:26:55.106613] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:21.249 [2024-04-17 16:26:55.106617] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:21.249 [2024-04-17 16:26:55.106621] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x119c300): datao=0, datal=4096, cccid=7 00:15:21.249 [2024-04-17 16:26:55.106625] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11e5360) on tqpair(0x119c300): expected_datao=0, payload_size=4096 00:15:21.249 [2024-04-17 16:26:55.106630] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:21.249 [2024-04-17 16:26:55.106637] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:21.249 [2024-04-17 16:26:55.106641] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:21.249 [2024-04-17 16:26:55.106649] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:21.249 [2024-04-17 16:26:55.106656] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:21.249 [2024-04-17 16:26:55.106659] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:21.249 [2024-04-17 16:26:55.106663] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11e50a0) on tqpair=0x119c300 00:15:21.249 [2024-04-17 16:26:55.106682] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:21.249 [2024-04-17 16:26:55.106689] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:21.249 [2024-04-17 16:26:55.106693] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:21.249 [2024-04-17 16:26:55.106697] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11e4f40) on tqpair=0x119c300 00:15:21.249 [2024-04-17 16:26:55.106709] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:21.249 [2024-04-17 16:26:55.106715] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:21.249 [2024-04-17 16:26:55.106719] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:21.249 [2024-04-17 16:26:55.106723] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11e5200) on tqpair=0x119c300 00:15:21.249 [2024-04-17 16:26:55.106731] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:21.250 [2024-04-17 16:26:55.106737] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:21.250 [2024-04-17 16:26:55.106741] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:21.250 [2024-04-17 16:26:55.106745] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11e5360) on tqpair=0x119c300 00:15:21.250 ===================================================== 00:15:21.250 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:21.250 ===================================================== 00:15:21.250 Controller Capabilities/Features 00:15:21.250 ================================ 00:15:21.250 Vendor ID: 8086 00:15:21.250 Subsystem Vendor ID: 8086 00:15:21.250 Serial Number: SPDK00000000000001 00:15:21.250 Model Number: SPDK bdev Controller 00:15:21.250 Firmware Version: 24.05 00:15:21.250 Recommended Arb Burst: 6 00:15:21.250 IEEE OUI Identifier: e4 d2 5c 00:15:21.250 Multi-path I/O 00:15:21.250 May have multiple subsystem ports: Yes 00:15:21.250 May have multiple controllers: Yes 00:15:21.250 Associated with SR-IOV VF: No 
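A note on the admin traffic above: the GET FEATURES capsules select a feature through cdw10 (01h Arbitration, 02h Power Management, 04h Temperature Threshold, 07h Number of Queues, and later 05h Error Recovery), and the GET LOG PAGE capsules carry the transfer length in the upper half of cdw10 (NUMDL, a zero-based dword count), which is where the datal values in the c2h_data headers come from. A quick decode, assuming the standard NVMe cdw10 layout (the shell lines are illustrative, not part of this run):

  # LID = cdw10[7:0], NUMDL = cdw10[31:16]; payload bytes = (NUMDL + 1) * 4
  printf '%d\n' $(( ((0x07ff0001 >> 16) + 1) * 4 ))   # LID 01h Error Information    -> 8192 (datal=8192, cccid=5)
  printf '%d\n' $(( ((0x007f0002 >> 16) + 1) * 4 ))   # LID 02h SMART / Health       -> 512  (datal=512,  cccid=4)
  printf '%d\n' $(( ((0x007f0003 >> 16) + 1) * 4 ))   # LID 03h Firmware Slot        -> 512  (datal=512,  cccid=6)
  printf '%d\n' $(( ((0x03ff0005 >> 16) + 1) * 4 ))   # LID 05h Commands and Effects -> 4096 (datal=4096, cccid=7)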
00:15:21.250 Max Data Transfer Size: 131072 00:15:21.250 Max Number of Namespaces: 32 00:15:21.250 Max Number of I/O Queues: 127 00:15:21.250 NVMe Specification Version (VS): 1.3 00:15:21.250 NVMe Specification Version (Identify): 1.3 00:15:21.250 Maximum Queue Entries: 128 00:15:21.250 Contiguous Queues Required: Yes 00:15:21.250 Arbitration Mechanisms Supported 00:15:21.250 Weighted Round Robin: Not Supported 00:15:21.250 Vendor Specific: Not Supported 00:15:21.250 Reset Timeout: 15000 ms 00:15:21.250 Doorbell Stride: 4 bytes 00:15:21.250 NVM Subsystem Reset: Not Supported 00:15:21.250 Command Sets Supported 00:15:21.250 NVM Command Set: Supported 00:15:21.250 Boot Partition: Not Supported 00:15:21.250 Memory Page Size Minimum: 4096 bytes 00:15:21.250 Memory Page Size Maximum: 4096 bytes 00:15:21.250 Persistent Memory Region: Not Supported 00:15:21.250 Optional Asynchronous Events Supported 00:15:21.250 Namespace Attribute Notices: Supported 00:15:21.250 Firmware Activation Notices: Not Supported 00:15:21.250 ANA Change Notices: Not Supported 00:15:21.250 PLE Aggregate Log Change Notices: Not Supported 00:15:21.250 LBA Status Info Alert Notices: Not Supported 00:15:21.250 EGE Aggregate Log Change Notices: Not Supported 00:15:21.250 Normal NVM Subsystem Shutdown event: Not Supported 00:15:21.250 Zone Descriptor Change Notices: Not Supported 00:15:21.250 Discovery Log Change Notices: Not Supported 00:15:21.250 Controller Attributes 00:15:21.250 128-bit Host Identifier: Supported 00:15:21.250 Non-Operational Permissive Mode: Not Supported 00:15:21.250 NVM Sets: Not Supported 00:15:21.250 Read Recovery Levels: Not Supported 00:15:21.250 Endurance Groups: Not Supported 00:15:21.250 Predictable Latency Mode: Not Supported 00:15:21.250 Traffic Based Keep Alive: Not Supported 00:15:21.250 Namespace Granularity: Not Supported 00:15:21.250 SQ Associations: Not Supported 00:15:21.250 UUID List: Not Supported 00:15:21.250 Multi-Domain Subsystem: Not Supported 00:15:21.250 Fixed Capacity Management: Not Supported 00:15:21.250 Variable Capacity Management: Not Supported 00:15:21.250 Delete Endurance Group: Not Supported 00:15:21.250 Delete NVM Set: Not Supported 00:15:21.250 Extended LBA Formats Supported: Not Supported 00:15:21.250 Flexible Data Placement Supported: Not Supported 00:15:21.250 00:15:21.250 Controller Memory Buffer Support 00:15:21.250 ================================ 00:15:21.250 Supported: No 00:15:21.250 00:15:21.250 Persistent Memory Region Support 00:15:21.250 ================================ 00:15:21.250 Supported: No 00:15:21.250 00:15:21.250 Admin Command Set Attributes 00:15:21.250 ============================ 00:15:21.250 Security Send/Receive: Not Supported 00:15:21.250 Format NVM: Not Supported 00:15:21.250 Firmware Activate/Download: Not Supported 00:15:21.250 Namespace Management: Not Supported 00:15:21.250 Device Self-Test: Not Supported 00:15:21.250 Directives: Not Supported 00:15:21.250 NVMe-MI: Not Supported 00:15:21.250 Virtualization Management: Not Supported 00:15:21.250 Doorbell Buffer Config: Not Supported 00:15:21.250 Get LBA Status Capability: Not Supported 00:15:21.250 Command & Feature Lockdown Capability: Not Supported 00:15:21.250 Abort Command Limit: 4 00:15:21.250 Async Event Request Limit: 4 00:15:21.250 Number of Firmware Slots: N/A 00:15:21.250 Firmware Slot 1 Read-Only: N/A 00:15:21.250 Firmware Activation Without Reset: N/A 00:15:21.250 Multiple Update Detection Support: N/A 00:15:21.250 Firmware Update Granularity: No Information Provided
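The controller dump being printed here appears to come from the SPDK identify example that host/identify.sh drives (its teardown is visible further down). A roughly equivalent view can be pulled against the same target with stock nvme-cli; the /dev/nvme0 node is an assumption, and the subsystem must still be exported for this to work:

  nvme discover -t tcp -a 10.0.0.2 -s 4420                                # should list nqn.2016-06.io.spdk:cnode1
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme id-ctrl /dev/nvme0                                                 # MDTS, NN, SQES/CQES and the other fields above
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1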
00:15:21.250 Per-Namespace SMART Log: No 00:15:21.250 Asymmetric Namespace Access Log Page: Not Supported 00:15:21.250 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:15:21.250 Command Effects Log Page: Supported 00:15:21.250 Get Log Page Extended Data: Supported 00:15:21.250 Telemetry Log Pages: Not Supported 00:15:21.250 Persistent Event Log Pages: Not Supported 00:15:21.250 Supported Log Pages Log Page: May Support 00:15:21.250 Commands Supported & Effects Log Page: Not Supported 00:15:21.250 Feature Identifiers & Effects Log Page: May Support 00:15:21.250 NVMe-MI Commands & Effects Log Page: May Support 00:15:21.250 Data Area 4 for Telemetry Log: Not Supported 00:15:21.250 Error Log Page Entries Supported: 128 00:15:21.250 Keep Alive: Supported 00:15:21.250 Keep Alive Granularity: 10000 ms 00:15:21.250 00:15:21.250 NVM Command Set Attributes 00:15:21.250 ========================== 00:15:21.250 Submission Queue Entry Size 00:15:21.250 Max: 64 00:15:21.250 Min: 64 00:15:21.250 Completion Queue Entry Size 00:15:21.250 Max: 16 00:15:21.250 Min: 16 00:15:21.250 Number of Namespaces: 32 00:15:21.250 Compare Command: Supported 00:15:21.250 Write Uncorrectable Command: Not Supported 00:15:21.250 Dataset Management Command: Supported 00:15:21.250 Write Zeroes Command: Supported 00:15:21.250 Set Features Save Field: Not Supported 00:15:21.250 Reservations: Supported 00:15:21.250 Timestamp: Not Supported 00:15:21.250 Copy: Supported 00:15:21.250 Volatile Write Cache: Present 00:15:21.250 Atomic Write Unit (Normal): 1 00:15:21.250 Atomic Write Unit (PFail): 1 00:15:21.250 Atomic Compare & Write Unit: 1 00:15:21.250 Fused Compare & Write: Supported 00:15:21.250 Scatter-Gather List 00:15:21.250 SGL Command Set: Supported 00:15:21.250 SGL Keyed: Supported 00:15:21.250 SGL Bit Bucket Descriptor: Not Supported 00:15:21.250 SGL Metadata Pointer: Not Supported 00:15:21.250 Oversized SGL: Not Supported 00:15:21.250 SGL Metadata Address: Not Supported 00:15:21.250 SGL Offset: Supported 00:15:21.250 Transport SGL Data Block: Not Supported 00:15:21.250 Replay Protected Memory Block: Not Supported 00:15:21.250 00:15:21.250 Firmware Slot Information 00:15:21.250 ========================= 00:15:21.250 Active slot: 1 00:15:21.250 Slot 1 Firmware Revision: 24.05 00:15:21.250 00:15:21.250 00:15:21.250 Commands Supported and Effects 00:15:21.250 ============================== 00:15:21.250 Admin Commands 00:15:21.250 -------------- 00:15:21.250 Get Log Page (02h): Supported 00:15:21.250 Identify (06h): Supported 00:15:21.250 Abort (08h): Supported 00:15:21.250 Set Features (09h): Supported 00:15:21.250 Get Features (0Ah): Supported 00:15:21.250 Asynchronous Event Request (0Ch): Supported 00:15:21.250 Keep Alive (18h): Supported 00:15:21.250 I/O Commands 00:15:21.250 ------------ 00:15:21.250 Flush (00h): Supported LBA-Change 00:15:21.250 Write (01h): Supported LBA-Change 00:15:21.250 Read (02h): Supported 00:15:21.250 Compare (05h): Supported 00:15:21.250 Write Zeroes (08h): Supported LBA-Change 00:15:21.250 Dataset Management (09h): Supported LBA-Change 00:15:21.250 Copy (19h): Supported LBA-Change 00:15:21.250 Unknown (79h): Supported LBA-Change 00:15:21.250 Unknown (7Ah): Supported 00:15:21.250 00:15:21.250 Error Log 00:15:21.250 ========= 00:15:21.250 00:15:21.250 Arbitration 00:15:21.250 =========== 00:15:21.250 Arbitration Burst: 1 00:15:21.250 00:15:21.250 Power Management 00:15:21.250 ================ 00:15:21.250 Number of Power States: 1 00:15:21.250 Current Power State: Power State #0 00:15:21.250 Power State
#0: 00:15:21.250 Max Power: 0.00 W 00:15:21.250 Non-Operational State: Operational 00:15:21.250 Entry Latency: Not Reported 00:15:21.250 Exit Latency: Not Reported 00:15:21.250 Relative Read Throughput: 0 00:15:21.250 Relative Read Latency: 0 00:15:21.250 Relative Write Throughput: 0 00:15:21.250 Relative Write Latency: 0 00:15:21.250 Idle Power: Not Reported 00:15:21.250 Active Power: Not Reported 00:15:21.250 Non-Operational Permissive Mode: Not Supported 00:15:21.250 00:15:21.250 Health Information 00:15:21.250 ================== 00:15:21.250 Critical Warnings: 00:15:21.250 Available Spare Space: OK 00:15:21.250 Temperature: OK 00:15:21.251 Device Reliability: OK 00:15:21.251 Read Only: No 00:15:21.251 Volatile Memory Backup: OK 00:15:21.251 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:21.251 Temperature Threshold: [2024-04-17 16:26:55.110902] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:21.251 [2024-04-17 16:26:55.110915] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x119c300) 00:15:21.251 [2024-04-17 16:26:55.110927] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:21.251 [2024-04-17 16:26:55.110960] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e5360, cid 7, qid 0 00:15:21.251 [2024-04-17 16:26:55.111075] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:21.251 [2024-04-17 16:26:55.111084] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:21.251 [2024-04-17 16:26:55.111088] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:21.251 [2024-04-17 16:26:55.111092] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11e5360) on tqpair=0x119c300 00:15:21.251 [2024-04-17 16:26:55.111137] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:15:21.251 [2024-04-17 16:26:55.111153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:21.251 [2024-04-17 16:26:55.111160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:21.251 [2024-04-17 16:26:55.111167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:21.251 [2024-04-17 16:26:55.111174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:21.251 [2024-04-17 16:26:55.111184] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:21.251 [2024-04-17 16:26:55.111189] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:21.251 [2024-04-17 16:26:55.111193] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x119c300) 00:15:21.251 [2024-04-17 16:26:55.111201] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:21.251 [2024-04-17 16:26:55.111226] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e4de0, cid 3, qid 0 00:15:21.251 [2024-04-17 16:26:55.111291] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:21.251 [2024-04-17 16:26:55.111298] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:15:21.251 [2024-04-17 16:26:55.111302] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:21.251 [2024-04-17 16:26:55.111307] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11e4de0) on tqpair=0x119c300 00:15:21.251 [2024-04-17 16:26:55.111316] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:21.251 [2024-04-17 16:26:55.111321] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:21.251 [2024-04-17 16:26:55.111325] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x119c300) 00:15:21.251 [2024-04-17 16:26:55.111332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:21.251 [2024-04-17 16:26:55.111356] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e4de0, cid 3, qid 0 00:15:21.251 [2024-04-17 16:26:55.111446] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:21.251 [2024-04-17 16:26:55.111453] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:21.251 [2024-04-17 16:26:55.111457] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:21.251 [2024-04-17 16:26:55.111461] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11e4de0) on tqpair=0x119c300 00:15:21.251 [2024-04-17 16:26:55.111468] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:15:21.251 [2024-04-17 16:26:55.111473] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:15:21.251 [2024-04-17 16:26:55.111483] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:21.251 [2024-04-17 16:26:55.111488] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:21.251 [2024-04-17 16:26:55.111492] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x119c300) 00:15:21.251 [2024-04-17 16:26:55.111499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:21.251 [2024-04-17 16:26:55.111518] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e4de0, cid 3, qid 0 00:15:21.251 [2024-04-17 16:26:55.111589] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:21.251 [2024-04-17 16:26:55.111596] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:21.251 [2024-04-17 16:26:55.111600] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:21.251 [2024-04-17 16:26:55.111604] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11e4de0) on tqpair=0x119c300 00:15:21.251 [2024-04-17 16:26:55.111617] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:21.251 [2024-04-17 16:26:55.111622] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:21.251 [2024-04-17 16:26:55.111626] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x119c300) 00:15:21.251 [2024-04-17 16:26:55.111633] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:21.251 [2024-04-17 16:26:55.111652] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e4de0, cid 3, qid 0 00:15:21.251 [2024-04-17 16:26:55.111711] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu 
type = 5 00:15:21.251 [2024-04-17 16:26:55.111718] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:21.251 [2024-04-17 16:26:55.111722] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:21.251 [2024-04-17 16:26:55.111726] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11e4de0) on tqpair=0x119c300 00:15:21.251 [2024-04-17 16:26:55.111737] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:21.251 [2024-04-17 16:26:55.111742] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:21.251 [2024-04-17 16:26:55.111746] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x119c300) 00:15:21.251 [2024-04-17 16:26:55.111754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:21.251 [2024-04-17 16:26:55.111786] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e4de0, cid 3, qid 0 00:15:21.251 [2024-04-17 16:26:55.111847] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:21.251 [2024-04-17 16:26:55.111854] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:21.251 [2024-04-17 16:26:55.111858] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:21.251 [2024-04-17 16:26:55.111862] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11e4de0) on tqpair=0x119c300 00:15:21.251 [2024-04-17 16:26:55.111881] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:21.251 [2024-04-17 16:26:55.111886] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:21.251 [2024-04-17 16:26:55.111890] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x119c300) 00:15:21.251 [2024-04-17 16:26:55.111898] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:21.251 [2024-04-17 16:26:55.111920] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e4de0, cid 3, qid 0 00:15:21.251 [2024-04-17 16:26:55.111977] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:21.251 [2024-04-17 16:26:55.111984] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:21.251 [2024-04-17 16:26:55.111987] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:21.251 [2024-04-17 16:26:55.111991] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11e4de0) on tqpair=0x119c300 00:15:21.251 [2024-04-17 16:26:55.112003] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:21.251 [2024-04-17 16:26:55.112008] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:21.251 [2024-04-17 16:26:55.112012] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x119c300) 00:15:21.251 [2024-04-17 16:26:55.112020] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:21.251 [2024-04-17 16:26:55.112039] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e4de0, cid 3, qid 0 00:15:21.251 [2024-04-17 16:26:55.112096] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:21.251 [2024-04-17 16:26:55.112103] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:21.251 [2024-04-17 16:26:55.112107] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:21.251 [2024-04-17 16:26:55.112111] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11e4de0) on tqpair=0x119c300 00:15:21.251 [2024-04-17 16:26:55.112122] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:21.251 [2024-04-17 16:26:55.112127] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:21.251 [2024-04-17 16:26:55.112131] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x119c300) 00:15:21.252 [2024-04-17 16:26:55.112139] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:21.252 [2024-04-17 16:26:55.112158] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e4de0, cid 3, qid 0 00:15:21.252 [2024-04-17 16:26:55.112215] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:21.252 [2024-04-17 16:26:55.112223] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:21.252 [2024-04-17 16:26:55.112226] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:21.252 [2024-04-17 16:26:55.112231] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11e4de0) on tqpair=0x119c300 00:15:21.252 [2024-04-17 16:26:55.112242] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:21.252 [2024-04-17 16:26:55.112247] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:21.252 [2024-04-17 16:26:55.112251] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x119c300) 00:15:21.252 [2024-04-17 16:26:55.112259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:21.252 [2024-04-17 16:26:55.112278] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e4de0, cid 3, qid 0 00:15:21.252 [2024-04-17 16:26:55.112340] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:21.252 [2024-04-17 16:26:55.112347] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:21.252 [2024-04-17 16:26:55.112350] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:21.252 [2024-04-17 16:26:55.112355] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11e4de0) on tqpair=0x119c300 00:15:21.252 [2024-04-17 16:26:55.112366] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:21.252 [2024-04-17 16:26:55.112371] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:21.252 [2024-04-17 16:26:55.112375] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x119c300) 00:15:21.252 [2024-04-17 16:26:55.112387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:21.252 [2024-04-17 16:26:55.112407] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e4de0, cid 3, qid 0 00:15:21.252 [2024-04-17 16:26:55.112463] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:21.252 [2024-04-17 16:26:55.112470] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:21.252 [2024-04-17 16:26:55.112474] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:21.252 [2024-04-17 16:26:55.112478] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11e4de0) on 
tqpair=0x119c300 00:15:21.252 [2024-04-17 16:26:55.112489] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:21.252 [2024-04-17 16:26:55.112494] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:21.252 [2024-04-17 16:26:55.112498] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x119c300) 00:15:21.252 [2024-04-17 16:26:55.112506] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:21.252 [2024-04-17 16:26:55.112525] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e4de0, cid 3, qid 0 00:15:21.252 [2024-04-17 16:26:55.112588] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:21.252 [2024-04-17 16:26:55.112595] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:21.252 [2024-04-17 16:26:55.112599] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:21.252 [2024-04-17 16:26:55.112603] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11e4de0) on tqpair=0x119c300 00:15:21.252 [2024-04-17 16:26:55.112615] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:21.252 [2024-04-17 16:26:55.112620] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:21.252 [2024-04-17 16:26:55.112624] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x119c300) 00:15:21.252 [2024-04-17 16:26:55.112632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:21.252 [2024-04-17 16:26:55.112651] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e4de0, cid 3, qid 0 00:15:21.252 [2024-04-17 16:26:55.112710] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:21.252 [2024-04-17 16:26:55.112717] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:21.252 [2024-04-17 16:26:55.112721] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:21.252 [2024-04-17 16:26:55.112725] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11e4de0) on tqpair=0x119c300 00:15:21.252 [2024-04-17 16:26:55.112737] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:21.252 [2024-04-17 16:26:55.112742] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:21.252 [2024-04-17 16:26:55.112745] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x119c300) 00:15:21.252 [2024-04-17 16:26:55.112753] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:21.252 [2024-04-17 16:26:55.112783] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e4de0, cid 3, qid 0 00:15:21.252 [2024-04-17 16:26:55.112843] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:21.252 [2024-04-17 16:26:55.112851] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:21.252 [2024-04-17 16:26:55.112855] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:21.252 [2024-04-17 16:26:55.112859] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11e4de0) on tqpair=0x119c300 00:15:21.252 [2024-04-17 16:26:55.112871] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:21.252 [2024-04-17 16:26:55.112876] nvme_tcp.c: 
949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:21.252 [2024-04-17 16:26:55.112880] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x119c300) 00:15:21.252 [2024-04-17 16:26:55.112888] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:21.252 [2024-04-17 16:26:55.112909] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e4de0, cid 3, qid 0 00:15:21.252 [2024-04-17 16:26:55.112965] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:21.252 [2024-04-17 16:26:55.112972] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:21.252 [2024-04-17 16:26:55.112976] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:21.252 [2024-04-17 16:26:55.112980] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11e4de0) on tqpair=0x119c300 00:15:21.252 [2024-04-17 16:26:55.112991] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:21.252 [2024-04-17 16:26:55.112996] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:21.252 [2024-04-17 16:26:55.113000] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x119c300) 00:15:21.252 [2024-04-17 16:26:55.113008] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:21.252 [2024-04-17 16:26:55.113027] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e4de0, cid 3, qid 0 00:15:21.252 [2024-04-17 16:26:55.113081] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:21.252 [2024-04-17 16:26:55.113089] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:21.252 [2024-04-17 16:26:55.113092] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:21.252 [2024-04-17 16:26:55.113096] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11e4de0) on tqpair=0x119c300 00:15:21.252 [2024-04-17 16:26:55.113108] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:21.252 [2024-04-17 16:26:55.113113] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:21.252 [2024-04-17 16:26:55.113117] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x119c300) 00:15:21.252 [2024-04-17 16:26:55.113124] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:21.252 [2024-04-17 16:26:55.113144] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e4de0, cid 3, qid 0 00:15:21.252 [2024-04-17 16:26:55.113203] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:21.252 [2024-04-17 16:26:55.113221] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:21.252 [2024-04-17 16:26:55.113226] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:21.252 [2024-04-17 16:26:55.113230] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11e4de0) on tqpair=0x119c300 00:15:21.252 [2024-04-17 16:26:55.113243] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:21.252 [2024-04-17 16:26:55.113249] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:21.252 [2024-04-17 16:26:55.113253] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x119c300) 
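The run of near-identical FABRIC PROPERTY GET qid:0 cid:3 entries around this point reads as the controller shutdown handshake, not an error: after "Prepare to destruct SSD", the host sets CC.SHN through the single FABRIC PROPERTY SET above and, with RTD3E reported as 0 us, falls back to the default 10000 ms budget, issuing one Property Get of CSTS per iteration until CSTS.SHST signals completion. The poll converges almost immediately; the summary below reports "shutdown complete in 7 milliseconds", after which host/identify.sh deletes the subsystem over RPC (nvmf_delete_subsystem) and nvmftestfini unloads nvme-tcp, nvme-fabrics and nvme-keyring.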
00:15:21.252 [2024-04-17 16:26:55.113260] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:21.252 [2024-04-17 16:26:55.113282] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e4de0, cid 3, qid 0 00:15:21.252 [2024-04-17 16:26:55.113337] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:21.252 [2024-04-17 16:26:55.113345] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:21.252 [2024-04-17 16:26:55.113349] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:21.252 [2024-04-17 16:26:55.113353] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11e4de0) on tqpair=0x119c300 00:15:21.252 [2024-04-17 16:26:55.113365] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:21.252 [2024-04-17 16:26:55.113370] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:21.252 [2024-04-17 16:26:55.113374] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x119c300) 00:15:21.252 [2024-04-17 16:26:55.113381] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:21.252 [2024-04-17 16:26:55.113400] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e4de0, cid 3, qid 0 00:15:21.252 [2024-04-17 16:26:55.113458] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:21.252 [2024-04-17 16:26:55.113473] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:21.252 [2024-04-17 16:26:55.113478] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:21.252 [2024-04-17 16:26:55.113483] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11e4de0) on tqpair=0x119c300 00:15:21.252 [2024-04-17 16:26:55.113495] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:21.252 [2024-04-17 16:26:55.113500] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:21.252 [2024-04-17 16:26:55.113504] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x119c300) 00:15:21.252 [2024-04-17 16:26:55.113512] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:21.252 [2024-04-17 16:26:55.113533] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e4de0, cid 3, qid 0 00:15:21.252 [2024-04-17 16:26:55.113602] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:21.252 [2024-04-17 16:26:55.113617] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:21.252 [2024-04-17 16:26:55.113622] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:21.252 [2024-04-17 16:26:55.113626] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11e4de0) on tqpair=0x119c300 00:15:21.252 [2024-04-17 16:26:55.113639] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:21.252 [2024-04-17 16:26:55.113644] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:21.252 [2024-04-17 16:26:55.113648] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x119c300) 00:15:21.253 [2024-04-17 16:26:55.113656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:21.253 
[2024-04-17 16:26:55.113677] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e4de0, cid 3, qid 0 00:15:21.253 [2024-04-17 16:26:55.113733] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:21.253 [2024-04-17 16:26:55.113741] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:21.253 [2024-04-17 16:26:55.113744] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:21.253 [2024-04-17 16:26:55.113760] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11e4de0) on tqpair=0x119c300 00:15:21.253 [2024-04-17 16:26:55.113784] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:21.253 [2024-04-17 16:26:55.113791] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:21.253 [2024-04-17 16:26:55.113795] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x119c300) 00:15:21.253 [2024-04-17 16:26:55.113803] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:21.253 [2024-04-17 16:26:55.113825] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e4de0, cid 3, qid 0 00:15:21.253 [2024-04-17 16:26:55.113888] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:21.253 [2024-04-17 16:26:55.113895] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:21.253 [2024-04-17 16:26:55.113899] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:21.253 [2024-04-17 16:26:55.113903] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11e4de0) on tqpair=0x119c300 00:15:21.253 [2024-04-17 16:26:55.113915] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:21.253 [2024-04-17 16:26:55.113920] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:21.253 [2024-04-17 16:26:55.113924] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x119c300) 00:15:21.253 [2024-04-17 16:26:55.113932] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:21.253 [2024-04-17 16:26:55.113952] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e4de0, cid 3, qid 0 00:15:21.253 [2024-04-17 16:26:55.114007] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:21.253 [2024-04-17 16:26:55.114015] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:21.253 [2024-04-17 16:26:55.114019] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:21.253 [2024-04-17 16:26:55.114023] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11e4de0) on tqpair=0x119c300 00:15:21.253 [2024-04-17 16:26:55.114035] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:21.253 [2024-04-17 16:26:55.114040] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:21.253 [2024-04-17 16:26:55.114044] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x119c300) 00:15:21.253 [2024-04-17 16:26:55.114052] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:21.253 [2024-04-17 16:26:55.114071] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e4de0, cid 3, qid 0 00:15:21.253 [2024-04-17 16:26:55.114134] 
nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:21.253 [2024-04-17 16:26:55.114146] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:21.253 [2024-04-17 16:26:55.114150] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:21.253 [2024-04-17 16:26:55.114154] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11e4de0) on tqpair=0x119c300 00:15:21.253 [2024-04-17 16:26:55.114167] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:21.253 [2024-04-17 16:26:55.114172] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:21.253 [2024-04-17 16:26:55.114176] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x119c300) 00:15:21.253 [2024-04-17 16:26:55.114183] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:21.253 [2024-04-17 16:26:55.114203] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e4de0, cid 3, qid 0 00:15:21.253 [2024-04-17 16:26:55.114263] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:21.253 [2024-04-17 16:26:55.114270] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:21.253 [2024-04-17 16:26:55.114274] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:21.253 [2024-04-17 16:26:55.114278] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11e4de0) on tqpair=0x119c300 00:15:21.253 [2024-04-17 16:26:55.114289] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:21.253 [2024-04-17 16:26:55.114294] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:21.253 [2024-04-17 16:26:55.114298] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x119c300) 00:15:21.253 [2024-04-17 16:26:55.114306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:21.253 [2024-04-17 16:26:55.114325] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e4de0, cid 3, qid 0 00:15:21.253 [2024-04-17 16:26:55.114384] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:21.253 [2024-04-17 16:26:55.114391] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:21.253 [2024-04-17 16:26:55.114395] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:21.253 [2024-04-17 16:26:55.114399] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11e4de0) on tqpair=0x119c300 00:15:21.253 [2024-04-17 16:26:55.114411] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:21.253 [2024-04-17 16:26:55.114416] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:21.253 [2024-04-17 16:26:55.114420] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x119c300) 00:15:21.253 [2024-04-17 16:26:55.114427] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:21.253 [2024-04-17 16:26:55.114446] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e4de0, cid 3, qid 0 00:15:21.253 [2024-04-17 16:26:55.114507] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:21.253 [2024-04-17 16:26:55.114514] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:21.253 
[2024-04-17 16:26:55.114518] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:21.253 [2024-04-17 16:26:55.114522] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11e4de0) on tqpair=0x119c300 00:15:21.253 [2024-04-17 16:26:55.114533] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:21.253 [2024-04-17 16:26:55.114538] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:21.253 [2024-04-17 16:26:55.114542] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x119c300) 00:15:21.253 [2024-04-17 16:26:55.114550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:21.253 [2024-04-17 16:26:55.114569] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e4de0, cid 3, qid 0 00:15:21.253 [2024-04-17 16:26:55.114630] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:21.253 [2024-04-17 16:26:55.114637] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:21.253 [2024-04-17 16:26:55.114641] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:21.253 [2024-04-17 16:26:55.114645] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11e4de0) on tqpair=0x119c300 00:15:21.253 [2024-04-17 16:26:55.114656] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:21.253 [2024-04-17 16:26:55.114662] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:21.253 [2024-04-17 16:26:55.114665] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x119c300) 00:15:21.253 [2024-04-17 16:26:55.114673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:21.253 [2024-04-17 16:26:55.114692] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e4de0, cid 3, qid 0 00:15:21.253 [2024-04-17 16:26:55.114751] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:21.253 [2024-04-17 16:26:55.114763] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:21.253 [2024-04-17 16:26:55.114768] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:21.253 [2024-04-17 16:26:55.118798] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11e4de0) on tqpair=0x119c300 00:15:21.253 [2024-04-17 16:26:55.118825] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:21.253 [2024-04-17 16:26:55.118831] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:21.253 [2024-04-17 16:26:55.118835] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x119c300) 00:15:21.253 [2024-04-17 16:26:55.118845] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:21.253 [2024-04-17 16:26:55.118873] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11e4de0, cid 3, qid 0 00:15:21.253 [2024-04-17 16:26:55.118949] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:21.253 [2024-04-17 16:26:55.118958] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:21.253 [2024-04-17 16:26:55.118961] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:21.253 [2024-04-17 16:26:55.118966] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: 
complete tcp_req(0x11e4de0) on tqpair=0x119c300 00:15:21.253 [2024-04-17 16:26:55.118975] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:15:21.253 0 Kelvin (-273 Celsius) 00:15:21.253 Available Spare: 0% 00:15:21.253 Available Spare Threshold: 0% 00:15:21.253 Life Percentage Used: 0% 00:15:21.253 Data Units Read: 0 00:15:21.253 Data Units Written: 0 00:15:21.253 Host Read Commands: 0 00:15:21.253 Host Write Commands: 0 00:15:21.253 Controller Busy Time: 0 minutes 00:15:21.253 Power Cycles: 0 00:15:21.253 Power On Hours: 0 hours 00:15:21.253 Unsafe Shutdowns: 0 00:15:21.253 Unrecoverable Media Errors: 0 00:15:21.253 Lifetime Error Log Entries: 0 00:15:21.253 Warning Temperature Time: 0 minutes 00:15:21.253 Critical Temperature Time: 0 minutes 00:15:21.253 00:15:21.253 Number of Queues 00:15:21.253 ================ 00:15:21.253 Number of I/O Submission Queues: 127 00:15:21.253 Number of I/O Completion Queues: 127 00:15:21.253 00:15:21.253 Active Namespaces 00:15:21.253 ================= 00:15:21.253 Namespace ID:1 00:15:21.253 Error Recovery Timeout: Unlimited 00:15:21.253 Command Set Identifier: NVM (00h) 00:15:21.253 Deallocate: Supported 00:15:21.253 Deallocated/Unwritten Error: Not Supported 00:15:21.253 Deallocated Read Value: Unknown 00:15:21.253 Deallocate in Write Zeroes: Not Supported 00:15:21.253 Deallocated Guard Field: 0xFFFF 00:15:21.253 Flush: Supported 00:15:21.253 Reservation: Supported 00:15:21.253 Namespace Sharing Capabilities: Multiple Controllers 00:15:21.253 Size (in LBAs): 131072 (0GiB) 00:15:21.253 Capacity (in LBAs): 131072 (0GiB) 00:15:21.253 Utilization (in LBAs): 131072 (0GiB) 00:15:21.253 NGUID: ABCDEF0123456789ABCDEF0123456789 00:15:21.253 EUI64: ABCDEF0123456789 00:15:21.254 UUID: 92372d18-a592-4470-bd88-eb48574144c2 00:15:21.254 Thin Provisioning: Not Supported 00:15:21.254 Per-NS Atomic Units: Yes 00:15:21.254 Atomic Boundary Size (Normal): 0 00:15:21.254 Atomic Boundary Size (PFail): 0 00:15:21.254 Atomic Boundary Offset: 0 00:15:21.254 Maximum Single Source Range Length: 65535 00:15:21.254 Maximum Copy Length: 65535 00:15:21.254 Maximum Source Range Count: 1 00:15:21.254 NGUID/EUI64 Never Reused: No 00:15:21.254 Namespace Write Protected: No 00:15:21.254 Number of LBA Formats: 1 00:15:21.254 Current LBA Format: LBA Format #00 00:15:21.254 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:21.254 00:15:21.254 16:26:55 -- host/identify.sh@51 -- # sync 00:15:21.254 16:26:55 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:21.254 16:26:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:21.254 16:26:55 -- common/autotest_common.sh@10 -- # set +x 00:15:21.254 16:26:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:21.254 16:26:55 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:15:21.254 16:26:55 -- host/identify.sh@56 -- # nvmftestfini 00:15:21.254 16:26:55 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:21.254 16:26:55 -- nvmf/common.sh@117 -- # sync 00:15:21.254 16:26:55 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:21.254 16:26:55 -- nvmf/common.sh@120 -- # set +e 00:15:21.254 16:26:55 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:21.254 16:26:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:21.254 rmmod nvme_tcp 00:15:21.254 rmmod nvme_fabrics 00:15:21.254 rmmod nvme_keyring 00:15:21.254 16:26:55 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:21.254 16:26:55 -- 
nvmf/common.sh@124 -- # set -e 00:15:21.254 16:26:55 -- nvmf/common.sh@125 -- # return 0 00:15:21.254 16:26:55 -- nvmf/common.sh@478 -- # '[' -n 80314 ']' 00:15:21.254 16:26:55 -- nvmf/common.sh@479 -- # killprocess 80314 00:15:21.254 16:26:55 -- common/autotest_common.sh@936 -- # '[' -z 80314 ']' 00:15:21.254 16:26:55 -- common/autotest_common.sh@940 -- # kill -0 80314 00:15:21.254 16:26:55 -- common/autotest_common.sh@941 -- # uname 00:15:21.254 16:26:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:21.254 16:26:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80314 00:15:21.254 16:26:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:21.254 16:26:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:21.254 killing process with pid 80314 00:15:21.254 16:26:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80314' 00:15:21.254 16:26:55 -- common/autotest_common.sh@955 -- # kill 80314 00:15:21.254 [2024-04-17 16:26:55.267435] app.c: 930:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:15:21.254 16:26:55 -- common/autotest_common.sh@960 -- # wait 80314 00:15:21.820 16:26:55 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:21.820 16:26:55 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:21.820 16:26:55 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:21.820 16:26:55 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:21.820 16:26:55 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:21.820 16:26:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:21.820 16:26:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:21.820 16:26:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:21.820 16:26:55 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:21.820 ************************************ 00:15:21.820 END TEST nvmf_identify 00:15:21.820 ************************************ 00:15:21.820 00:15:21.820 real 0m2.659s 00:15:21.820 user 0m7.345s 00:15:21.820 sys 0m0.690s 00:15:21.820 16:26:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:21.820 16:26:55 -- common/autotest_common.sh@10 -- # set +x 00:15:21.820 16:26:55 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:21.820 16:26:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:21.820 16:26:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:21.820 16:26:55 -- common/autotest_common.sh@10 -- # set +x 00:15:21.820 ************************************ 00:15:21.820 START TEST nvmf_perf 00:15:21.820 ************************************ 00:15:21.820 16:26:55 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:21.820 * Looking for test storage... 
00:15:21.820 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:21.820 16:26:55 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:21.820 16:26:55 -- nvmf/common.sh@7 -- # uname -s 00:15:21.820 16:26:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:21.820 16:26:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:21.820 16:26:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:21.820 16:26:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:21.820 16:26:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:21.820 16:26:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:21.820 16:26:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:21.820 16:26:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:21.820 16:26:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:21.820 16:26:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:21.820 16:26:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d 00:15:21.820 16:26:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=35bbb10f-fc38-42ac-b909-033700c5e05d 00:15:21.820 16:26:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:21.820 16:26:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:21.820 16:26:55 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:21.820 16:26:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:21.820 16:26:55 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:21.820 16:26:55 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:21.820 16:26:55 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:21.820 16:26:55 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:21.821 16:26:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.821 16:26:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.821 16:26:55 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.821 16:26:55 -- paths/export.sh@5 -- # export PATH 00:15:21.821 16:26:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.821 16:26:55 -- nvmf/common.sh@47 -- # : 0 00:15:21.821 16:26:55 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:21.821 16:26:55 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:21.821 16:26:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:21.821 16:26:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:21.821 16:26:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:21.821 16:26:55 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:21.821 16:26:55 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:21.821 16:26:55 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:21.821 16:26:55 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:21.821 16:26:55 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:21.821 16:26:55 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:21.821 16:26:55 -- host/perf.sh@17 -- # nvmftestinit 00:15:21.821 16:26:55 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:21.821 16:26:55 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:21.821 16:26:55 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:21.821 16:26:55 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:21.821 16:26:55 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:21.821 16:26:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:21.821 16:26:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:21.821 16:26:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:21.821 16:26:55 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:15:21.821 16:26:55 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:15:21.821 16:26:55 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:15:21.821 16:26:55 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:15:21.821 16:26:55 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:15:21.821 16:26:55 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:15:21.821 16:26:55 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:21.821 16:26:55 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:21.821 16:26:55 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:21.821 16:26:55 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:21.821 16:26:55 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:21.821 16:26:55 -- nvmf/common.sh@146 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:21.821 16:26:55 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:21.821 16:26:55 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:21.821 16:26:55 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:21.821 16:26:55 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:21.821 16:26:55 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:21.821 16:26:55 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:21.821 16:26:55 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:22.080 16:26:55 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:22.080 Cannot find device "nvmf_tgt_br" 00:15:22.080 16:26:55 -- nvmf/common.sh@155 -- # true 00:15:22.080 16:26:55 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:22.080 Cannot find device "nvmf_tgt_br2" 00:15:22.080 16:26:55 -- nvmf/common.sh@156 -- # true 00:15:22.080 16:26:55 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:22.080 16:26:55 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:22.080 Cannot find device "nvmf_tgt_br" 00:15:22.080 16:26:55 -- nvmf/common.sh@158 -- # true 00:15:22.080 16:26:55 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:22.080 Cannot find device "nvmf_tgt_br2" 00:15:22.080 16:26:55 -- nvmf/common.sh@159 -- # true 00:15:22.080 16:26:55 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:22.080 16:26:55 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:22.080 16:26:55 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:22.080 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:22.080 16:26:55 -- nvmf/common.sh@162 -- # true 00:15:22.080 16:26:55 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:22.080 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:22.080 16:26:55 -- nvmf/common.sh@163 -- # true 00:15:22.080 16:26:55 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:22.080 16:26:55 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:22.080 16:26:55 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:22.080 16:26:55 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:22.080 16:26:55 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:22.080 16:26:56 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:22.080 16:26:56 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:22.080 16:26:56 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:22.080 16:26:56 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:22.080 16:26:56 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:22.080 16:26:56 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:22.080 16:26:56 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:22.080 16:26:56 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:22.080 16:26:56 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:22.080 16:26:56 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
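For reference, the nvmf_veth_init sequence traced above (and completed just below) condenses to this standalone sketch, assuming iproute2 on a clean host; namespace and interface names mirror the log, and the second target interface (nvmf_tgt_if2 / 10.0.0.3) follows the same pattern and is omitted here:

  ip netns add nvmf_tgt_ns_spdk                                 # target gets its own network namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up     # bridge the host-side peers together
  ip link set nvmf_init_br master nvmf_br && ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the default port

After this, 10.0.0.1 (initiator side) and 10.0.0.2 (inside the target namespace) are reachable across the bridge, which is exactly what the ping checks below verify.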
00:15:22.080 16:26:56 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:22.080 16:26:56 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:22.080 16:26:56 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:22.080 16:26:56 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:22.080 16:26:56 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:22.339 16:26:56 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:22.339 16:26:56 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:22.339 16:26:56 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:22.339 16:26:56 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:22.339 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:22.339 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:15:22.339 00:15:22.339 --- 10.0.0.2 ping statistics --- 00:15:22.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:22.339 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:15:22.339 16:26:56 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:22.339 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:22.339 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:15:22.339 00:15:22.339 --- 10.0.0.3 ping statistics --- 00:15:22.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:22.339 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:15:22.339 16:26:56 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:22.339 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:22.339 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:15:22.339 00:15:22.339 --- 10.0.0.1 ping statistics --- 00:15:22.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:22.339 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:15:22.339 16:26:56 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:22.339 16:26:56 -- nvmf/common.sh@422 -- # return 0 00:15:22.339 16:26:56 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:22.339 16:26:56 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:22.339 16:26:56 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:22.339 16:26:56 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:22.339 16:26:56 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:22.339 16:26:56 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:22.339 16:26:56 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:22.339 16:26:56 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:15:22.339 16:26:56 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:22.339 16:26:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:22.339 16:26:56 -- common/autotest_common.sh@10 -- # set +x 00:15:22.339 16:26:56 -- nvmf/common.sh@470 -- # nvmfpid=80539 00:15:22.339 16:26:56 -- nvmf/common.sh@471 -- # waitforlisten 80539 00:15:22.339 16:26:56 -- common/autotest_common.sh@817 -- # '[' -z 80539 ']' 00:15:22.339 16:26:56 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:22.339 16:26:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.339 16:26:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:22.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:22.339 16:26:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.339 16:26:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:22.339 16:26:56 -- common/autotest_common.sh@10 -- # set +x 00:15:22.339 [2024-04-17 16:26:56.244645] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:15:22.339 [2024-04-17 16:26:56.244745] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:22.339 [2024-04-17 16:26:56.378479] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:22.598 [2024-04-17 16:26:56.496523] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:22.598 [2024-04-17 16:26:56.496584] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:22.598 [2024-04-17 16:26:56.496597] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:22.598 [2024-04-17 16:26:56.496605] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:22.598 [2024-04-17 16:26:56.496612] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:22.598 [2024-04-17 16:26:56.496696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:22.598 [2024-04-17 16:26:56.496830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:22.598 [2024-04-17 16:26:56.497457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:22.598 [2024-04-17 16:26:56.497547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.532 16:26:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:23.532 16:26:57 -- common/autotest_common.sh@850 -- # return 0 00:15:23.532 16:26:57 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:23.532 16:26:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:23.532 16:26:57 -- common/autotest_common.sh@10 -- # set +x 00:15:23.532 16:26:57 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:23.532 16:26:57 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:23.532 16:26:57 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:15:23.793 16:26:57 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:15:23.793 16:26:57 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:15:24.063 16:26:58 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:15:24.063 16:26:58 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:24.628 16:26:58 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:15:24.628 16:26:58 -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:15:24.628 16:26:58 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:15:24.628 16:26:58 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:15:24.628 16:26:58 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:24.629 [2024-04-17 16:26:58.638403] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:24.629 16:26:58 -- host/perf.sh@44 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:24.886 16:26:58 -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:24.886 16:26:58 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:25.143 16:26:59 -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:25.143 16:26:59 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:15:25.401 16:26:59 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:25.659 [2024-04-17 16:26:59.563558] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:25.659 16:26:59 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:25.917 16:26:59 -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:15:25.917 16:26:59 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:25.917 16:26:59 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:15:25.917 16:26:59 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:27.291 Initializing NVMe Controllers 00:15:27.291 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:27.291 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:15:27.291 Initialization complete. Launching workers. 00:15:27.291 ======================================================== 00:15:27.291 Latency(us) 00:15:27.291 Device Information : IOPS MiB/s Average min max 00:15:27.291 PCIE (0000:00:10.0) NSID 1 from core 0: 24416.00 95.38 1310.35 326.76 7005.40 00:15:27.291 ======================================================== 00:15:27.291 Total : 24416.00 95.38 1310.35 326.76 7005.40 00:15:27.291 00:15:27.291 16:27:00 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:28.687 Initializing NVMe Controllers 00:15:28.687 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:28.687 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:28.687 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:28.687 Initialization complete. Launching workers. 
00:15:28.687 ======================================================== 00:15:28.687 Latency(us) 00:15:28.687 Device Information : IOPS MiB/s Average min max 00:15:28.687 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3275.95 12.80 304.92 119.74 4288.27 00:15:28.687 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.51 0.48 8160.20 7949.79 12034.87 00:15:28.687 ======================================================== 00:15:28.687 Total : 3399.46 13.28 590.32 119.74 12034.87 00:15:28.687 00:15:28.687 16:27:02 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:29.620 Initializing NVMe Controllers 00:15:29.620 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:29.620 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:29.620 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:29.620 Initialization complete. Launching workers. 00:15:29.620 ======================================================== 00:15:29.620 Latency(us) 00:15:29.620 Device Information : IOPS MiB/s Average min max 00:15:29.620 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8116.10 31.70 3944.89 737.23 8174.88 00:15:29.620 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2690.04 10.51 12020.14 6238.81 20310.79 00:15:29.620 ======================================================== 00:15:29.620 Total : 10806.14 42.21 5955.12 737.23 20310.79 00:15:29.620 00:15:29.620 16:27:03 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:15:29.620 16:27:03 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:32.149 Initializing NVMe Controllers 00:15:32.149 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:32.149 Controller IO queue size 128, less than required. 00:15:32.149 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:32.149 Controller IO queue size 128, less than required. 00:15:32.149 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:32.149 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:32.149 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:32.149 Initialization complete. Launching workers. 
00:15:32.149 ======================================================== 00:15:32.149 Latency(us) 00:15:32.149 Device Information : IOPS MiB/s Average min max 00:15:32.150 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1419.96 354.99 91375.97 58226.82 163642.12 00:15:32.150 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 556.98 139.25 239642.71 97003.61 393692.53 00:15:32.150 ======================================================== 00:15:32.150 Total : 1976.94 494.24 133148.64 58226.82 393692.53 00:15:32.150 00:15:32.150 16:27:06 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:15:32.408 No valid NVMe controllers or AIO or URING devices found 00:15:32.408 Initializing NVMe Controllers 00:15:32.408 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:32.408 Controller IO queue size 128, less than required. 00:15:32.408 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:32.408 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:15:32.408 Controller IO queue size 128, less than required. 00:15:32.408 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:32.408 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:15:32.408 WARNING: Some requested NVMe devices were skipped 00:15:32.408 16:27:06 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:15:34.937 Initializing NVMe Controllers 00:15:34.937 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:34.937 Controller IO queue size 128, less than required. 00:15:34.937 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:34.937 Controller IO queue size 128, less than required. 00:15:34.937 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:34.937 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:34.937 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:34.937 Initialization complete. Launching workers. 
00:15:34.937 00:15:34.937 ==================== 00:15:34.937 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:15:34.937 TCP transport: 00:15:34.937 polls: 8785 00:15:34.937 idle_polls: 4835 00:15:34.937 sock_completions: 3950 00:15:34.937 nvme_completions: 4229 00:15:34.937 submitted_requests: 6358 00:15:34.937 queued_requests: 1 00:15:34.937 00:15:34.937 ==================== 00:15:34.937 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:15:34.937 TCP transport: 00:15:34.937 polls: 11088 00:15:34.937 idle_polls: 7693 00:15:34.937 sock_completions: 3395 00:15:34.937 nvme_completions: 6427 00:15:34.937 submitted_requests: 9650 00:15:34.937 queued_requests: 1 00:15:34.937 ======================================================== 00:15:34.937 Latency(us) 00:15:34.937 Device Information : IOPS MiB/s Average min max 00:15:34.937 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1056.79 264.20 125080.83 85481.11 216264.25 00:15:34.937 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1606.18 401.55 79941.76 30325.28 129926.49 00:15:34.937 ======================================================== 00:15:34.937 Total : 2662.98 665.74 97855.03 30325.28 216264.25 00:15:34.937 00:15:34.937 16:27:08 -- host/perf.sh@66 -- # sync 00:15:34.937 16:27:08 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:35.503 16:27:09 -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:15:35.503 16:27:09 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:35.503 16:27:09 -- host/perf.sh@114 -- # nvmftestfini 00:15:35.503 16:27:09 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:35.503 16:27:09 -- nvmf/common.sh@117 -- # sync 00:15:35.503 16:27:09 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:35.503 16:27:09 -- nvmf/common.sh@120 -- # set +e 00:15:35.503 16:27:09 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:35.503 16:27:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:35.503 rmmod nvme_tcp 00:15:35.503 rmmod nvme_fabrics 00:15:35.503 rmmod nvme_keyring 00:15:35.503 16:27:09 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:35.503 16:27:09 -- nvmf/common.sh@124 -- # set -e 00:15:35.503 16:27:09 -- nvmf/common.sh@125 -- # return 0 00:15:35.503 16:27:09 -- nvmf/common.sh@478 -- # '[' -n 80539 ']' 00:15:35.503 16:27:09 -- nvmf/common.sh@479 -- # killprocess 80539 00:15:35.503 16:27:09 -- common/autotest_common.sh@936 -- # '[' -z 80539 ']' 00:15:35.503 16:27:09 -- common/autotest_common.sh@940 -- # kill -0 80539 00:15:35.503 16:27:09 -- common/autotest_common.sh@941 -- # uname 00:15:35.503 16:27:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:35.503 16:27:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80539 00:15:35.503 killing process with pid 80539 00:15:35.503 16:27:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:35.503 16:27:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:35.503 16:27:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80539' 00:15:35.503 16:27:09 -- common/autotest_common.sh@955 -- # kill 80539 00:15:35.503 16:27:09 -- common/autotest_common.sh@960 -- # wait 80539 00:15:36.437 16:27:10 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:36.437 16:27:10 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:36.437 16:27:10 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:36.437 16:27:10 -- 
nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:36.437 16:27:10 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:36.437 16:27:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:36.437 16:27:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:36.437 16:27:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:36.437 16:27:10 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:36.437 00:15:36.437 real 0m14.433s 00:15:36.437 user 0m53.272s 00:15:36.437 sys 0m3.507s 00:15:36.437 16:27:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:36.437 16:27:10 -- common/autotest_common.sh@10 -- # set +x 00:15:36.437 ************************************ 00:15:36.437 END TEST nvmf_perf 00:15:36.437 ************************************ 00:15:36.437 16:27:10 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:36.437 16:27:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:36.437 16:27:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:36.437 16:27:10 -- common/autotest_common.sh@10 -- # set +x 00:15:36.437 ************************************ 00:15:36.437 START TEST nvmf_fio_host 00:15:36.437 ************************************ 00:15:36.437 16:27:10 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:36.437 * Looking for test storage... 00:15:36.437 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:36.437 16:27:10 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:36.437 16:27:10 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:36.437 16:27:10 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:36.437 16:27:10 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:36.437 16:27:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.437 16:27:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.437 16:27:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.437 16:27:10 -- paths/export.sh@5 -- # export PATH 00:15:36.437 16:27:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.437 16:27:10 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:36.437 16:27:10 -- nvmf/common.sh@7 -- # uname -s 00:15:36.437 16:27:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:36.437 16:27:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:36.437 16:27:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:36.437 16:27:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:36.437 16:27:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:36.437 16:27:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:36.437 16:27:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:36.437 16:27:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:36.437 16:27:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:36.437 16:27:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:36.437 16:27:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d 00:15:36.437 16:27:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=35bbb10f-fc38-42ac-b909-033700c5e05d 00:15:36.437 16:27:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:36.437 16:27:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:36.437 16:27:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:36.437 16:27:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:36.437 16:27:10 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:36.437 16:27:10 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:36.437 16:27:10 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:36.437 16:27:10 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:36.437 16:27:10 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.437 16:27:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.437 16:27:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.437 16:27:10 -- paths/export.sh@5 -- # export PATH 00:15:36.437 16:27:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.437 16:27:10 -- nvmf/common.sh@47 -- # : 0 00:15:36.437 16:27:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:36.437 16:27:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:36.437 16:27:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:36.437 16:27:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:36.437 16:27:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:36.437 16:27:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:36.437 16:27:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:36.437 16:27:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:36.437 16:27:10 -- host/fio.sh@12 -- # nvmftestinit 00:15:36.438 16:27:10 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:36.438 16:27:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:36.438 16:27:10 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:36.438 16:27:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:36.438 16:27:10 -- 
nvmf/common.sh@401 -- # remove_spdk_ns 00:15:36.438 16:27:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:36.438 16:27:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:36.438 16:27:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:36.438 16:27:10 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:15:36.438 16:27:10 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:15:36.438 16:27:10 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:15:36.438 16:27:10 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:15:36.438 16:27:10 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:15:36.438 16:27:10 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:15:36.438 16:27:10 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:36.438 16:27:10 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:36.438 16:27:10 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:36.438 16:27:10 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:36.438 16:27:10 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:36.438 16:27:10 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:36.438 16:27:10 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:36.438 16:27:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:36.438 16:27:10 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:36.438 16:27:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:36.438 16:27:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:36.438 16:27:10 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:36.438 16:27:10 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:36.438 16:27:10 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:36.438 Cannot find device "nvmf_tgt_br" 00:15:36.438 16:27:10 -- nvmf/common.sh@155 -- # true 00:15:36.438 16:27:10 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:36.438 Cannot find device "nvmf_tgt_br2" 00:15:36.438 16:27:10 -- nvmf/common.sh@156 -- # true 00:15:36.438 16:27:10 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:36.438 16:27:10 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:36.438 Cannot find device "nvmf_tgt_br" 00:15:36.438 16:27:10 -- nvmf/common.sh@158 -- # true 00:15:36.438 16:27:10 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:36.438 Cannot find device "nvmf_tgt_br2" 00:15:36.438 16:27:10 -- nvmf/common.sh@159 -- # true 00:15:36.438 16:27:10 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:36.696 16:27:10 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:36.696 16:27:10 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:36.696 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:36.696 16:27:10 -- nvmf/common.sh@162 -- # true 00:15:36.696 16:27:10 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:36.696 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:36.696 16:27:10 -- nvmf/common.sh@163 -- # true 00:15:36.696 16:27:10 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:36.696 16:27:10 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:36.696 16:27:10 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 
00:15:36.696 16:27:10 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:36.696 16:27:10 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:36.696 16:27:10 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:36.696 16:27:10 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:36.696 16:27:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:36.696 16:27:10 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:36.696 16:27:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:36.696 16:27:10 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:36.696 16:27:10 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:36.696 16:27:10 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:36.696 16:27:10 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:36.696 16:27:10 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:36.696 16:27:10 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:36.696 16:27:10 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:36.696 16:27:10 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:36.697 16:27:10 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:36.697 16:27:10 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:36.697 16:27:10 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:36.697 16:27:10 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:36.697 16:27:10 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:36.697 16:27:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:36.697 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:36.697 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.108 ms 00:15:36.697 00:15:36.697 --- 10.0.0.2 ping statistics --- 00:15:36.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.697 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:15:36.697 16:27:10 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:36.697 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:36.697 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:15:36.697 00:15:36.697 --- 10.0.0.3 ping statistics --- 00:15:36.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.697 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:15:36.697 16:27:10 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:36.955 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:36.955 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:15:36.955 00:15:36.955 --- 10.0.0.1 ping statistics --- 00:15:36.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.955 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:15:36.955 16:27:10 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:36.955 16:27:10 -- nvmf/common.sh@422 -- # return 0 00:15:36.955 16:27:10 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:36.955 16:27:10 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:36.955 16:27:10 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:36.955 16:27:10 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:36.955 16:27:10 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:36.955 16:27:10 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:36.955 16:27:10 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:36.955 16:27:10 -- host/fio.sh@14 -- # [[ y != y ]] 00:15:36.955 16:27:10 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:15:36.955 16:27:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:36.955 16:27:10 -- common/autotest_common.sh@10 -- # set +x 00:15:36.955 16:27:10 -- host/fio.sh@22 -- # nvmfpid=81020 00:15:36.955 16:27:10 -- host/fio.sh@21 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:36.955 16:27:10 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:36.956 16:27:10 -- host/fio.sh@26 -- # waitforlisten 81020 00:15:36.956 16:27:10 -- common/autotest_common.sh@817 -- # '[' -z 81020 ']' 00:15:36.956 16:27:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.956 16:27:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:36.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:36.956 16:27:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:36.956 16:27:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:36.956 16:27:10 -- common/autotest_common.sh@10 -- # set +x 00:15:36.956 [2024-04-17 16:27:10.834211] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:15:36.956 [2024-04-17 16:27:10.834320] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:36.956 [2024-04-17 16:27:10.977974] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:37.215 [2024-04-17 16:27:11.100049] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:37.215 [2024-04-17 16:27:11.100313] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:37.215 [2024-04-17 16:27:11.100334] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:37.215 [2024-04-17 16:27:11.100343] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:37.215 [2024-04-17 16:27:11.100350] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
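The trace notices above mean the tracepoint buffer can be read either live or post-mortem; a short sketch (the -f form for reading a saved file is an assumption about the spdk_trace tool, not something shown in this log):

  spdk_trace -s nvmf -i 0                 # live snapshot, exactly as the notice suggests
  cp /dev/shm/nvmf_trace.0 /tmp/          # or keep the shm file for offline analysis
  spdk_trace -f /tmp/nvmf_trace.0         # assumed offline invocation; verify against your SPDK build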
00:15:37.215 [2024-04-17 16:27:11.100550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:37.215 [2024-04-17 16:27:11.100653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:37.215 [2024-04-17 16:27:11.100838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:37.215 [2024-04-17 16:27:11.100843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.781 16:27:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:37.781 16:27:11 -- common/autotest_common.sh@850 -- # return 0 00:15:37.781 16:27:11 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:37.781 16:27:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:37.781 16:27:11 -- common/autotest_common.sh@10 -- # set +x 00:15:38.040 [2024-04-17 16:27:11.826535] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:38.040 16:27:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:38.040 16:27:11 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:15:38.040 16:27:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:38.040 16:27:11 -- common/autotest_common.sh@10 -- # set +x 00:15:38.040 16:27:11 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:38.040 16:27:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:38.040 16:27:11 -- common/autotest_common.sh@10 -- # set +x 00:15:38.040 Malloc1 00:15:38.040 16:27:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:38.040 16:27:11 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:38.040 16:27:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:38.040 16:27:11 -- common/autotest_common.sh@10 -- # set +x 00:15:38.040 16:27:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:38.040 16:27:11 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:38.040 16:27:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:38.040 16:27:11 -- common/autotest_common.sh@10 -- # set +x 00:15:38.040 16:27:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:38.040 16:27:11 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:38.040 16:27:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:38.040 16:27:11 -- common/autotest_common.sh@10 -- # set +x 00:15:38.040 [2024-04-17 16:27:11.936193] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:38.040 16:27:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:38.040 16:27:11 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:38.040 16:27:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:38.040 16:27:11 -- common/autotest_common.sh@10 -- # set +x 00:15:38.040 16:27:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:38.040 16:27:11 -- host/fio.sh@36 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:15:38.040 16:27:11 -- host/fio.sh@39 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:38.040 16:27:11 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 
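The fio_nvme expansion traced above shows all the wrapper really does: preload the SPDK ioengine and encode the connection as the fio filename. Spelled out as a plain command line:

  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
      /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

The key=value pairs in --filename form the transport ID (TCP to 10.0.0.2:4420, namespace 1); for a local controller the plugin's documented convention is a PCIe transport ID with dots in place of colons, e.g. 'trtype=PCIe traddr=0000.00.10.0 ns=1'.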
00:15:38.040 16:27:11 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:15:38.040 16:27:11 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:38.040 16:27:11 -- common/autotest_common.sh@1325 -- # local sanitizers 00:15:38.040 16:27:11 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:38.040 16:27:11 -- common/autotest_common.sh@1327 -- # shift 00:15:38.040 16:27:11 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:15:38.040 16:27:11 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:15:38.040 16:27:11 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:38.040 16:27:11 -- common/autotest_common.sh@1331 -- # grep libasan 00:15:38.040 16:27:11 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:15:38.040 16:27:11 -- common/autotest_common.sh@1331 -- # asan_lib= 00:15:38.040 16:27:11 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:15:38.040 16:27:11 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:15:38.040 16:27:11 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:38.040 16:27:11 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:15:38.040 16:27:11 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:15:38.040 16:27:11 -- common/autotest_common.sh@1331 -- # asan_lib= 00:15:38.040 16:27:11 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:15:38.040 16:27:11 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:38.040 16:27:11 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:38.299 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:38.299 fio-3.35 00:15:38.299 Starting 1 thread 00:15:40.830 00:15:40.830 test: (groupid=0, jobs=1): err= 0: pid=81103: Wed Apr 17 16:27:14 2024 00:15:40.830 read: IOPS=8520, BW=33.3MiB/s (34.9MB/s)(66.8MiB/2007msec) 00:15:40.830 slat (usec): min=2, max=254, avg= 2.56, stdev= 2.56 00:15:40.830 clat (usec): min=2371, max=14507, avg=7862.39, stdev=594.99 00:15:40.830 lat (usec): min=2415, max=14509, avg=7864.95, stdev=594.78 00:15:40.830 clat percentiles (usec): 00:15:40.830 | 1.00th=[ 6718], 5.00th=[ 7046], 10.00th=[ 7177], 20.00th=[ 7439], 00:15:40.830 | 30.00th=[ 7570], 40.00th=[ 7701], 50.00th=[ 7832], 60.00th=[ 7963], 00:15:40.830 | 70.00th=[ 8094], 80.00th=[ 8291], 90.00th=[ 8586], 95.00th=[ 8717], 00:15:40.830 | 99.00th=[ 9241], 99.50th=[ 9503], 99.90th=[12649], 99.95th=[13042], 00:15:40.830 | 99.99th=[14091] 00:15:40.830 bw ( KiB/s): min=33264, max=35056, per=99.95%, avg=34064.00, stdev=751.38, samples=4 00:15:40.830 iops : min= 8316, max= 8764, avg=8516.00, stdev=187.84, samples=4 00:15:40.830 write: IOPS=8517, BW=33.3MiB/s (34.9MB/s)(66.8MiB/2007msec); 0 zone resets 00:15:40.830 slat (usec): min=2, max=181, avg= 2.71, stdev= 2.01 00:15:40.830 clat (usec): min=1623, max=13758, avg=7116.97, stdev=528.66 00:15:40.830 lat (usec): min=1634, max=13761, avg=7119.68, stdev=528.54 00:15:40.830 clat percentiles (usec): 00:15:40.830 | 1.00th=[ 6063], 5.00th=[ 6390], 10.00th=[ 6587], 20.00th=[ 6783], 00:15:40.830 | 30.00th=[ 6915], 40.00th=[ 6980], 50.00th=[ 7111], 60.00th=[ 7242], 00:15:40.830 | 70.00th=[ 
7373], 80.00th=[ 7504], 90.00th=[ 7701], 95.00th=[ 7898], 00:15:40.830 | 99.00th=[ 8291], 99.50th=[ 8586], 99.90th=[11338], 99.95th=[11994], 00:15:40.830 | 99.99th=[13698] 00:15:40.830 bw ( KiB/s): min=33144, max=34696, per=100.00%, avg=34072.00, stdev=657.59, samples=4 00:15:40.830 iops : min= 8286, max= 8674, avg=8518.00, stdev=164.40, samples=4 00:15:40.830 lat (msec) : 2=0.02%, 4=0.12%, 10=99.63%, 20=0.23% 00:15:40.830 cpu : usr=65.45%, sys=24.83%, ctx=79, majf=0, minf=6 00:15:40.830 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:40.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:40.830 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:40.830 issued rwts: total=17100,17095,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:40.830 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:40.830 00:15:40.830 Run status group 0 (all jobs): 00:15:40.830 READ: bw=33.3MiB/s (34.9MB/s), 33.3MiB/s-33.3MiB/s (34.9MB/s-34.9MB/s), io=66.8MiB (70.0MB), run=2007-2007msec 00:15:40.830 WRITE: bw=33.3MiB/s (34.9MB/s), 33.3MiB/s-33.3MiB/s (34.9MB/s-34.9MB/s), io=66.8MiB (70.0MB), run=2007-2007msec 00:15:40.830 16:27:14 -- host/fio.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:40.830 16:27:14 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:40.830 16:27:14 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:15:40.830 16:27:14 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:40.830 16:27:14 -- common/autotest_common.sh@1325 -- # local sanitizers 00:15:40.830 16:27:14 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:40.830 16:27:14 -- common/autotest_common.sh@1327 -- # shift 00:15:40.830 16:27:14 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:15:40.830 16:27:14 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:15:40.830 16:27:14 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:40.830 16:27:14 -- common/autotest_common.sh@1331 -- # grep libasan 00:15:40.830 16:27:14 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:15:40.830 16:27:14 -- common/autotest_common.sh@1331 -- # asan_lib= 00:15:40.831 16:27:14 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:15:40.831 16:27:14 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:15:40.831 16:27:14 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:40.831 16:27:14 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:15:40.831 16:27:14 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:15:40.831 16:27:14 -- common/autotest_common.sh@1331 -- # asan_lib= 00:15:40.831 16:27:14 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:15:40.831 16:27:14 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:40.831 16:27:14 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:40.831 test: (g=0): 
rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:15:40.831 fio-3.35 00:15:40.831 Starting 1 thread 00:15:43.364 00:15:43.364 test: (groupid=0, jobs=1): err= 0: pid=81147: Wed Apr 17 16:27:16 2024 00:15:43.364 read: IOPS=7765, BW=121MiB/s (127MB/s)(244MiB/2007msec) 00:15:43.364 slat (usec): min=3, max=123, avg= 3.96, stdev= 2.05 00:15:43.364 clat (usec): min=2437, max=18996, avg=9880.30, stdev=2571.72 00:15:43.364 lat (usec): min=2440, max=19000, avg=9884.27, stdev=2571.83 00:15:43.364 clat percentiles (usec): 00:15:43.364 | 1.00th=[ 5014], 5.00th=[ 6128], 10.00th=[ 6783], 20.00th=[ 7635], 00:15:43.364 | 30.00th=[ 8356], 40.00th=[ 8979], 50.00th=[ 9765], 60.00th=[10552], 00:15:43.364 | 70.00th=[11207], 80.00th=[11731], 90.00th=[13173], 95.00th=[14484], 00:15:43.364 | 99.00th=[17171], 99.50th=[17957], 99.90th=[18482], 99.95th=[18482], 00:15:43.364 | 99.99th=[19006] 00:15:43.364 bw ( KiB/s): min=58464, max=67328, per=49.95%, avg=62064.00, stdev=3845.02, samples=4 00:15:43.364 iops : min= 3654, max= 4208, avg=3879.00, stdev=240.31, samples=4 00:15:43.364 write: IOPS=4441, BW=69.4MiB/s (72.8MB/s)(127MiB/1824msec); 0 zone resets 00:15:43.364 slat (usec): min=35, max=358, avg=40.15, stdev= 8.33 00:15:43.364 clat (usec): min=4750, max=19856, avg=11947.59, stdev=2092.33 00:15:43.364 lat (usec): min=4786, max=19897, avg=11987.75, stdev=2093.65 00:15:43.364 clat percentiles (usec): 00:15:43.364 | 1.00th=[ 7373], 5.00th=[ 8848], 10.00th=[ 9372], 20.00th=[10290], 00:15:43.364 | 30.00th=[10814], 40.00th=[11338], 50.00th=[11731], 60.00th=[12256], 00:15:43.364 | 70.00th=[12911], 80.00th=[13566], 90.00th=[14746], 95.00th=[15795], 00:15:43.364 | 99.00th=[17171], 99.50th=[17695], 99.90th=[18482], 99.95th=[18482], 00:15:43.364 | 99.99th=[19792] 00:15:43.364 bw ( KiB/s): min=59872, max=71040, per=90.47%, avg=64288.00, stdev=5108.45, samples=4 00:15:43.364 iops : min= 3742, max= 4440, avg=4018.00, stdev=319.28, samples=4 00:15:43.364 lat (msec) : 4=0.17%, 10=40.61%, 20=59.22% 00:15:43.364 cpu : usr=73.59%, sys=17.19%, ctx=5, majf=0, minf=19 00:15:43.364 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:15:43.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:43.364 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:43.364 issued rwts: total=15585,8101,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:43.364 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:43.364 00:15:43.364 Run status group 0 (all jobs): 00:15:43.364 READ: bw=121MiB/s (127MB/s), 121MiB/s-121MiB/s (127MB/s-127MB/s), io=244MiB (255MB), run=2007-2007msec 00:15:43.364 WRITE: bw=69.4MiB/s (72.8MB/s), 69.4MiB/s-69.4MiB/s (72.8MB/s-72.8MB/s), io=127MiB (133MB), run=1824-1824msec 00:15:43.364 16:27:16 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:43.364 16:27:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:43.364 16:27:16 -- common/autotest_common.sh@10 -- # set +x 00:15:43.364 16:27:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:43.364 16:27:16 -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:15:43.364 16:27:16 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:15:43.364 16:27:16 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:15:43.364 16:27:16 -- host/fio.sh@84 -- # nvmftestfini 00:15:43.364 16:27:16 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:43.364 16:27:16 -- nvmf/common.sh@117 -- # sync 00:15:43.364 16:27:16 -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:43.364 16:27:16 -- nvmf/common.sh@120 -- # set +e 00:15:43.364 16:27:16 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:43.364 16:27:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:43.364 rmmod nvme_tcp 00:15:43.364 rmmod nvme_fabrics 00:15:43.364 rmmod nvme_keyring 00:15:43.364 16:27:17 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:43.364 16:27:17 -- nvmf/common.sh@124 -- # set -e 00:15:43.364 16:27:17 -- nvmf/common.sh@125 -- # return 0 00:15:43.364 16:27:17 -- nvmf/common.sh@478 -- # '[' -n 81020 ']' 00:15:43.364 16:27:17 -- nvmf/common.sh@479 -- # killprocess 81020 00:15:43.364 16:27:17 -- common/autotest_common.sh@936 -- # '[' -z 81020 ']' 00:15:43.364 16:27:17 -- common/autotest_common.sh@940 -- # kill -0 81020 00:15:43.364 16:27:17 -- common/autotest_common.sh@941 -- # uname 00:15:43.364 16:27:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:43.364 16:27:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81020 00:15:43.364 killing process with pid 81020 00:15:43.364 16:27:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:43.364 16:27:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:43.364 16:27:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81020' 00:15:43.365 16:27:17 -- common/autotest_common.sh@955 -- # kill 81020 00:15:43.365 16:27:17 -- common/autotest_common.sh@960 -- # wait 81020 00:15:43.365 16:27:17 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:43.365 16:27:17 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:43.365 16:27:17 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:43.365 16:27:17 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:43.365 16:27:17 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:43.365 16:27:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:43.365 16:27:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:43.365 16:27:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:43.365 16:27:17 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:43.365 ************************************ 00:15:43.365 END TEST nvmf_fio_host 00:15:43.365 ************************************ 00:15:43.365 00:15:43.365 real 0m7.109s 00:15:43.365 user 0m27.393s 00:15:43.365 sys 0m2.099s 00:15:43.365 16:27:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:43.365 16:27:17 -- common/autotest_common.sh@10 -- # set +x 00:15:43.624 16:27:17 -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:43.624 16:27:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:43.624 16:27:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:43.624 16:27:17 -- common/autotest_common.sh@10 -- # set +x 00:15:43.624 ************************************ 00:15:43.624 START TEST nvmf_failover 00:15:43.624 ************************************ 00:15:43.624 16:27:17 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:43.624 * Looking for test storage... 
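[Before the failover suite output begins, it is worth unpacking the fio_nvme helper that drove the two runs above: it ldd's the SPDK fio plugin, greps for a sanitizer runtime to LD_PRELOAD ahead of it (empty here, since this is a non-ASan build), then launches stock fio with the plugin supplying the spdk ioengine. A condensed sketch of that sequence, using the paths from the xtrace; this is illustrative, not an extra step the job ran:

    # Sketch of the fio_plugin helper (autotest_common.sh@1323-1338).
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    # Pick up the ASan runtime if the plugin links one; empty on this build.
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'

The --filename string is how the plugin encodes the NVMe-oF connection parameters in place of a block device path.]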
00:15:43.624 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:43.624 16:27:17 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:43.624 16:27:17 -- nvmf/common.sh@7 -- # uname -s 00:15:43.624 16:27:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:43.624 16:27:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:43.624 16:27:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:43.624 16:27:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:43.624 16:27:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:43.624 16:27:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:43.624 16:27:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:43.624 16:27:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:43.624 16:27:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:43.624 16:27:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:43.624 16:27:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d 00:15:43.624 16:27:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=35bbb10f-fc38-42ac-b909-033700c5e05d 00:15:43.624 16:27:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:43.624 16:27:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:43.624 16:27:17 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:43.624 16:27:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:43.624 16:27:17 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:43.624 16:27:17 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:43.624 16:27:17 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:43.624 16:27:17 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:43.624 16:27:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.624 16:27:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.624 16:27:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.624 16:27:17 -- paths/export.sh@5 -- # export PATH 00:15:43.624 16:27:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.624 16:27:17 -- nvmf/common.sh@47 -- # : 0 00:15:43.624 16:27:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:43.624 16:27:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:43.624 16:27:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:43.624 16:27:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:43.624 16:27:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:43.624 16:27:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:43.624 16:27:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:43.624 16:27:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:43.624 16:27:17 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:43.624 16:27:17 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:43.624 16:27:17 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:43.624 16:27:17 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:43.624 16:27:17 -- host/failover.sh@18 -- # nvmftestinit 00:15:43.624 16:27:17 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:43.624 16:27:17 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:43.624 16:27:17 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:43.624 16:27:17 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:43.624 16:27:17 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:43.624 16:27:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:43.624 16:27:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:43.624 16:27:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:43.624 16:27:17 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:15:43.624 16:27:17 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:15:43.624 16:27:17 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:15:43.624 16:27:17 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:15:43.624 16:27:17 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:15:43.624 16:27:17 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:15:43.624 16:27:17 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:43.624 16:27:17 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:43.625 16:27:17 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:43.625 16:27:17 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:43.625 16:27:17 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:43.625 16:27:17 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:43.625 16:27:17 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:43.625 16:27:17 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:43.625 16:27:17 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:43.625 16:27:17 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:43.625 16:27:17 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:43.625 16:27:17 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:43.625 16:27:17 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:43.882 16:27:17 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:43.882 Cannot find device "nvmf_tgt_br" 00:15:43.882 16:27:17 -- nvmf/common.sh@155 -- # true 00:15:43.882 16:27:17 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:43.882 Cannot find device "nvmf_tgt_br2" 00:15:43.882 16:27:17 -- nvmf/common.sh@156 -- # true 00:15:43.882 16:27:17 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:43.882 16:27:17 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:43.882 Cannot find device "nvmf_tgt_br" 00:15:43.882 16:27:17 -- nvmf/common.sh@158 -- # true 00:15:43.882 16:27:17 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:43.882 Cannot find device "nvmf_tgt_br2" 00:15:43.882 16:27:17 -- nvmf/common.sh@159 -- # true 00:15:43.882 16:27:17 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:43.882 16:27:17 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:43.882 16:27:17 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:43.882 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:43.882 16:27:17 -- nvmf/common.sh@162 -- # true 00:15:43.882 16:27:17 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:43.882 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:43.882 16:27:17 -- nvmf/common.sh@163 -- # true 00:15:43.882 16:27:17 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:43.882 16:27:17 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:43.882 16:27:17 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:43.882 16:27:17 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:43.883 16:27:17 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:43.883 16:27:17 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:43.883 16:27:17 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:43.883 16:27:17 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:43.883 16:27:17 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:43.883 16:27:17 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:43.883 16:27:17 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:43.883 16:27:17 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:43.883 16:27:17 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:43.883 16:27:17 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:15:43.883 16:27:17 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:43.883 16:27:17 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:44.141 16:27:17 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:44.141 16:27:17 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:44.141 16:27:17 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:44.141 16:27:17 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:44.141 16:27:17 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:44.141 16:27:17 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:44.141 16:27:17 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:44.141 16:27:17 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:44.141 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:44.141 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.109 ms 00:15:44.141 00:15:44.141 --- 10.0.0.2 ping statistics --- 00:15:44.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:44.141 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:15:44.141 16:27:17 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:44.141 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:44.141 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:15:44.141 00:15:44.141 --- 10.0.0.3 ping statistics --- 00:15:44.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:44.141 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:15:44.141 16:27:17 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:44.141 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:44.141 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:15:44.141 00:15:44.141 --- 10.0.0.1 ping statistics --- 00:15:44.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:44.141 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:15:44.141 16:27:18 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:44.141 16:27:18 -- nvmf/common.sh@422 -- # return 0 00:15:44.142 16:27:18 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:44.142 16:27:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:44.142 16:27:18 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:44.142 16:27:18 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:44.142 16:27:18 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:44.142 16:27:18 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:44.142 16:27:18 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:44.142 16:27:18 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:15:44.142 16:27:18 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:44.142 16:27:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:44.142 16:27:18 -- common/autotest_common.sh@10 -- # set +x 00:15:44.142 16:27:18 -- nvmf/common.sh@470 -- # nvmfpid=81360 00:15:44.142 16:27:18 -- nvmf/common.sh@471 -- # waitforlisten 81360 00:15:44.142 16:27:18 -- common/autotest_common.sh@817 -- # '[' -z 81360 ']' 00:15:44.142 16:27:18 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:44.142 16:27:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:44.142 16:27:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:44.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:44.142 16:27:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:44.142 16:27:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:44.142 16:27:18 -- common/autotest_common.sh@10 -- # set +x 00:15:44.142 [2024-04-17 16:27:18.085840] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:15:44.142 [2024-04-17 16:27:18.086727] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:44.400 [2024-04-17 16:27:18.230607] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:44.400 [2024-04-17 16:27:18.368004] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:44.400 [2024-04-17 16:27:18.368394] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:44.400 [2024-04-17 16:27:18.368525] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:44.400 [2024-04-17 16:27:18.368608] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:44.400 [2024-04-17 16:27:18.368691] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
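[The veth plumbing above (nvmf/common.sh@166-207) builds the entire test network: a target namespace holding 10.0.0.2, the initiator side on 10.0.0.1 in the root namespace, and a bridge joining the root-namespace legs. Condensed into a standalone sketch, with interface names as in the log; the script itself is illustrative:

    # Sketch of nvmf_veth_init.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk        # target leg lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if              # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br               # bridge the two root-ns legs
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # target reachable, as the pings above confirm

A second pair, nvmf_tgt_if2 with 10.0.0.3, is added the same way for the multi-target cases, which is why the log pings three addresses.]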
00:15:44.400 [2024-04-17 16:27:18.368933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:44.400 [2024-04-17 16:27:18.369206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:44.400 [2024-04-17 16:27:18.369211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:45.337 16:27:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:45.337 16:27:19 -- common/autotest_common.sh@850 -- # return 0 00:15:45.337 16:27:19 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:45.337 16:27:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:45.337 16:27:19 -- common/autotest_common.sh@10 -- # set +x 00:15:45.337 16:27:19 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:45.337 16:27:19 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:45.597 [2024-04-17 16:27:19.454688] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:45.597 16:27:19 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:45.855 Malloc0 00:15:45.855 16:27:19 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:46.114 16:27:19 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:46.373 16:27:20 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:46.631 [2024-04-17 16:27:20.562273] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:46.631 16:27:20 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:46.889 [2024-04-17 16:27:20.814719] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:46.889 16:27:20 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:47.148 [2024-04-17 16:27:21.103143] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:15:47.148 16:27:21 -- host/failover.sh@31 -- # bdevperf_pid=81476 00:15:47.148 16:27:21 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:15:47.148 16:27:21 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:47.148 16:27:21 -- host/failover.sh@34 -- # waitforlisten 81476 /var/tmp/bdevperf.sock 00:15:47.148 16:27:21 -- common/autotest_common.sh@817 -- # '[' -z 81476 ']' 00:15:47.148 16:27:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:47.148 16:27:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:47.148 16:27:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:47.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
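[Stripped of the xtrace noise, the target-side provisioning that failover.sh@22-28 just performed is a handful of rpc.py calls plus a listener fan-out. The sketch below restates them; the individual calls are taken from the log, while the loop is a condensation of the three separate add_listener lines:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                       # failover.sh@22
    $rpc bdev_malloc_create 64 512 -b Malloc0                          # 64 MiB bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do                                     # three listeners to fail over between
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
    done

bdevperf then runs against its own RPC socket (-r /var/tmp/bdevperf.sock) so the test can drive both the target and the initiator side independently.]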
00:15:47.148 16:27:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:47.148 16:27:21 -- common/autotest_common.sh@10 -- # set +x 00:15:48.526 16:27:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:48.526 16:27:22 -- common/autotest_common.sh@850 -- # return 0 00:15:48.526 16:27:22 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:48.526 NVMe0n1 00:15:48.526 16:27:22 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:48.785 00:15:48.785 16:27:22 -- host/failover.sh@39 -- # run_test_pid=81522 00:15:48.785 16:27:22 -- host/failover.sh@41 -- # sleep 1 00:15:48.785 16:27:22 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:50.162 16:27:23 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:50.162 [2024-04-17 16:27:24.074539] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01c30 is same with the state(5) to be set 00:15:50.162 [2024-04-17 16:27:24.074597] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01c30 is same with the state(5) to be set 00:15:50.162 [2024-04-17 16:27:24.074609] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01c30 is same with the state(5) to be set 00:15:50.162 [2024-04-17 16:27:24.074619] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01c30 is same with the state(5) to be set 00:15:50.162 [2024-04-17 16:27:24.074628] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01c30 is same with the state(5) to be set 00:15:50.162 [2024-04-17 16:27:24.074638] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01c30 is same with the state(5) to be set 00:15:50.162 [2024-04-17 16:27:24.074647] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01c30 is same with the state(5) to be set 00:15:50.162 16:27:24 -- host/failover.sh@45 -- # sleep 3 00:15:53.447 16:27:27 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:53.447 00:15:53.447 16:27:27 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:54.014 16:27:27 -- host/failover.sh@50 -- # sleep 3 00:15:57.300 16:27:30 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:57.300 [2024-04-17 16:27:31.039200] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:57.300 16:27:31 -- host/failover.sh@55 -- # sleep 1 00:15:58.237 16:27:32 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:58.496 [2024-04-17 16:27:32.286159] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x8a9760 is same with the state(5) to be set 00:15:58.496 [... the same tcp.c:1587 recv-state message for tqpair=0x8a9760 repeated dozens more times; omitted ...] 00:15:58.497 16:27:32 -- host/failover.sh@59 -- # wait 81522 00:16:05.081 0 00:16:05.081 16:27:37 -- host/failover.sh@61 -- # killprocess 81476 00:16:05.081 16:27:37 -- common/autotest_common.sh@936 -- # '[' -z 81476 ']' 00:16:05.081 16:27:37 -- common/autotest_common.sh@940 -- # kill -0 81476 00:16:05.081 16:27:37 -- common/autotest_common.sh@941 -- # uname 00:16:05.081 16:27:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:05.081 16:27:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81476 00:16:05.081 killing process with pid 81476 16:27:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:05.081 16:27:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:05.081 16:27:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81476' 00:16:05.081 16:27:37 -- common/autotest_common.sh@955 -- # kill 81476 00:16:05.081 16:27:37 -- common/autotest_common.sh@960 -- # wait 81476 00:16:05.081 16:27:38 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt [2024-04-17 16:27:21.171144] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... [2024-04-17 16:27:21.171268] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81476 ] [2024-04-17 16:27:21.303186] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 [2024-04-17 16:27:21.423437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 Running I/O for 15 seconds...
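[The abort flood that follows in try.txt is the direct product of the listener cycling above: bdevperf holds 128 in-flight verify I/Os on NVMe0 while failover.sh alternately tears down and restores the ports it is connected through. The cycle, condensed (calls as in failover.sh@35-57; $rpc as in the earlier sketch, plus the bdevperf RPC socket):

    brpc="$rpc -s /var/tmp/bdevperf.sock"
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1  # second path
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 && sleep 3
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 && sleep 3
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 && sleep 1
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

Every command queued on a just-removed listener completes with ABORTED - SQ DELETION (00/08), which is exactly what the dump below shows; the test passes as long as the verify workload survives the reconnects.]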
00:16:05.081 [2024-04-17 16:27:24.075467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.081 [2024-04-17 16:27:24.075513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.081 [2024-04-17 16:27:24.075751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:74032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.081 [2024-04-17 16:27:24.075765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.082 [... identical print_command/print_completion pairs for the remaining in-flight READs (lba 73720-73816) and WRITEs (lba 74040-74472) omitted; every one completes with ABORTED - SQ DELETION (00/08) ...] 00:16:05.083 [2024-04-17 16:27:24.077670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:73824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.083 [2024-04-17 16:27:24.077684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:16:05.083 [2024-04-17 16:27:24.077700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:73832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.083 [2024-04-17 16:27:24.077713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.083 [2024-04-17 16:27:24.077729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:74480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.083 [2024-04-17 16:27:24.077742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.083 [2024-04-17 16:27:24.077758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:74488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.083 [2024-04-17 16:27:24.077790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.083 [2024-04-17 16:27:24.077816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:74496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.083 [2024-04-17 16:27:24.077831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.083 [2024-04-17 16:27:24.077856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:74504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.083 [2024-04-17 16:27:24.077873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.083 [2024-04-17 16:27:24.077889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:74512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.083 [2024-04-17 16:27:24.077903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.083 [2024-04-17 16:27:24.077919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:74520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.083 [2024-04-17 16:27:24.077933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.083 [2024-04-17 16:27:24.077949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.083 [2024-04-17 16:27:24.077963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.083 [2024-04-17 16:27:24.077985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:74536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.083 [2024-04-17 16:27:24.078008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.083 [2024-04-17 16:27:24.078023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:74544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.083 [2024-04-17 16:27:24.078038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.083 [2024-04-17 
16:27:24.078054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:74552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.083 [2024-04-17 16:27:24.078067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.083 [2024-04-17 16:27:24.078083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.083 [2024-04-17 16:27:24.078096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.083 [2024-04-17 16:27:24.078112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:74568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.083 [2024-04-17 16:27:24.078126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.083 [2024-04-17 16:27:24.078141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:74576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.083 [2024-04-17 16:27:24.078156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.083 [2024-04-17 16:27:24.078171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:74584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.083 [2024-04-17 16:27:24.078185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.083 [2024-04-17 16:27:24.078201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:74592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.083 [2024-04-17 16:27:24.078214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.083 [2024-04-17 16:27:24.078237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:74600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.083 [2024-04-17 16:27:24.078252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.083 [2024-04-17 16:27:24.078267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:74608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.083 [2024-04-17 16:27:24.078282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.083 [2024-04-17 16:27:24.078297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:74616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.083 [2024-04-17 16:27:24.078317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.083 [2024-04-17 16:27:24.078332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:74624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.083 [2024-04-17 16:27:24.078346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.083 [2024-04-17 16:27:24.078362] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:74632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.083 [2024-04-17 16:27:24.078376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.083 [2024-04-17 16:27:24.078391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:74640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.083 [2024-04-17 16:27:24.078405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.083 [2024-04-17 16:27:24.078420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:74648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.083 [2024-04-17 16:27:24.078434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.083 [2024-04-17 16:27:24.078449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:74656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.083 [2024-04-17 16:27:24.078463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.083 [2024-04-17 16:27:24.078483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:74664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.083 [2024-04-17 16:27:24.078497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.083 [2024-04-17 16:27:24.078512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:74672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.083 [2024-04-17 16:27:24.078527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.083 [2024-04-17 16:27:24.078542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:74680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.083 [2024-04-17 16:27:24.078556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.083 [2024-04-17 16:27:24.078571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:74688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.083 [2024-04-17 16:27:24.078585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.083 [2024-04-17 16:27:24.078600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:74696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.083 [2024-04-17 16:27:24.078621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.083 [2024-04-17 16:27:24.078637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:74704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.083 [2024-04-17 16:27:24.078651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.083 [2024-04-17 16:27:24.078666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:118 nsid:1 lba:74712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.083 [2024-04-17 16:27:24.078680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.083 [2024-04-17 16:27:24.078696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:74720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.083 [2024-04-17 16:27:24.078710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.083 [2024-04-17 16:27:24.078725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:74728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.083 [2024-04-17 16:27:24.078739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.083 [2024-04-17 16:27:24.078786] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:05.083 [2024-04-17 16:27:24.078804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73840 len:8 PRP1 0x0 PRP2 0x0 00:16:05.083 [2024-04-17 16:27:24.078817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.083 [2024-04-17 16:27:24.078836] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:05.083 [2024-04-17 16:27:24.078848] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:05.083 [2024-04-17 16:27:24.078859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73848 len:8 PRP1 0x0 PRP2 0x0 00:16:05.083 [2024-04-17 16:27:24.078872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.083 [2024-04-17 16:27:24.078886] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:05.083 [2024-04-17 16:27:24.078896] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:05.083 [2024-04-17 16:27:24.078907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73856 len:8 PRP1 0x0 PRP2 0x0 00:16:05.083 [2024-04-17 16:27:24.078921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.084 [2024-04-17 16:27:24.078934] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:05.084 [2024-04-17 16:27:24.078945] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:05.084 [2024-04-17 16:27:24.078956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73864 len:8 PRP1 0x0 PRP2 0x0 00:16:05.084 [2024-04-17 16:27:24.078974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.084 [2024-04-17 16:27:24.078989] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:05.084 [2024-04-17 16:27:24.078999] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:05.084 [2024-04-17 16:27:24.079009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73872 len:8 PRP1 
0x0 PRP2 0x0 00:16:05.084 [2024-04-17 16:27:24.079022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.084 [2024-04-17 16:27:24.079036] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:05.084 [2024-04-17 16:27:24.079050] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:05.084 [2024-04-17 16:27:24.079066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73880 len:8 PRP1 0x0 PRP2 0x0 00:16:05.084 [2024-04-17 16:27:24.079080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.084 [2024-04-17 16:27:24.079093] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:05.084 [2024-04-17 16:27:24.079103] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:05.084 [2024-04-17 16:27:24.079114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73888 len:8 PRP1 0x0 PRP2 0x0 00:16:05.084 [2024-04-17 16:27:24.079127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.084 [2024-04-17 16:27:24.079140] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:05.084 [2024-04-17 16:27:24.079150] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:05.084 [2024-04-17 16:27:24.079161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73896 len:8 PRP1 0x0 PRP2 0x0 00:16:05.084 [2024-04-17 16:27:24.079174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.084 [2024-04-17 16:27:24.079187] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:05.084 [2024-04-17 16:27:24.079197] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:05.084 [2024-04-17 16:27:24.079208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73904 len:8 PRP1 0x0 PRP2 0x0 00:16:05.084 [2024-04-17 16:27:24.079221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.084 [2024-04-17 16:27:24.079234] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:05.084 [2024-04-17 16:27:24.079244] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:05.084 [2024-04-17 16:27:24.079254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73912 len:8 PRP1 0x0 PRP2 0x0 00:16:05.084 [2024-04-17 16:27:24.079267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.084 [2024-04-17 16:27:24.079280] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:05.084 [2024-04-17 16:27:24.079291] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:05.084 [2024-04-17 16:27:24.079302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73920 len:8 PRP1 0x0 PRP2 0x0 00:16:05.084 [2024-04-17 16:27:24.079315] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.084 [2024-04-17 16:27:24.079328] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:05.084 [2024-04-17 16:27:24.079338] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:05.084 [2024-04-17 16:27:24.079349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73928 len:8 PRP1 0x0 PRP2 0x0 00:16:05.084 [2024-04-17 16:27:24.079367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.084 [2024-04-17 16:27:24.079381] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:05.084 [2024-04-17 16:27:24.079391] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:05.084 [2024-04-17 16:27:24.079402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73936 len:8 PRP1 0x0 PRP2 0x0 00:16:05.084 [2024-04-17 16:27:24.079414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.084 [2024-04-17 16:27:24.079437] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:05.084 [2024-04-17 16:27:24.079448] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:05.084 [2024-04-17 16:27:24.079459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73944 len:8 PRP1 0x0 PRP2 0x0 00:16:05.084 [2024-04-17 16:27:24.079472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.084 [2024-04-17 16:27:24.079485] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:05.084 [2024-04-17 16:27:24.079501] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:05.084 [2024-04-17 16:27:24.079512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73952 len:8 PRP1 0x0 PRP2 0x0 00:16:05.084 [2024-04-17 16:27:24.079525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.084 [2024-04-17 16:27:24.079539] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:05.084 [2024-04-17 16:27:24.079549] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:05.084 [2024-04-17 16:27:24.079559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73960 len:8 PRP1 0x0 PRP2 0x0 00:16:05.084 [2024-04-17 16:27:24.079572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.084 [2024-04-17 16:27:24.079586] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:05.084 [2024-04-17 16:27:24.079596] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:05.084 [2024-04-17 16:27:24.079607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73968 len:8 PRP1 0x0 PRP2 0x0 00:16:05.084 [2024-04-17 16:27:24.079620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.084 [2024-04-17 16:27:24.079633] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:05.084 [2024-04-17 16:27:24.079644] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:05.084 [2024-04-17 16:27:24.079654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73976 len:8 PRP1 0x0 PRP2 0x0 00:16:05.084 [2024-04-17 16:27:24.079667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.084 [2024-04-17 16:27:24.079680] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:05.084 [2024-04-17 16:27:24.079690] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:05.084 [2024-04-17 16:27:24.079701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73984 len:8 PRP1 0x0 PRP2 0x0 00:16:05.084 [2024-04-17 16:27:24.079714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.084 [2024-04-17 16:27:24.079727] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:05.084 [2024-04-17 16:27:24.079737] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:05.084 [2024-04-17 16:27:24.079748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73992 len:8 PRP1 0x0 PRP2 0x0 00:16:05.084 [2024-04-17 16:27:24.079761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.084 [2024-04-17 16:27:24.079789] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:05.084 [2024-04-17 16:27:24.079801] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:05.084 [2024-04-17 16:27:24.079812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74000 len:8 PRP1 0x0 PRP2 0x0 00:16:05.084 [2024-04-17 16:27:24.079844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.084 [2024-04-17 16:27:24.079860] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:05.084 [2024-04-17 16:27:24.079869] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:05.084 [2024-04-17 16:27:24.079879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74008 len:8 PRP1 0x0 PRP2 0x0 00:16:05.084 [2024-04-17 16:27:24.079892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.084 [2024-04-17 16:27:24.079906] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:05.084 [2024-04-17 16:27:24.079921] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:05.084 [2024-04-17 16:27:24.079931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74016 len:8 PRP1 0x0 PRP2 0x0 00:16:05.084 [2024-04-17 16:27:24.079944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:16:05.084 [2024-04-17 16:27:24.079957] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:05.084 [2024-04-17 16:27:24.079967] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:05.084 [2024-04-17 16:27:24.079977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74024 len:8 PRP1 0x0 PRP2 0x0 00:16:05.084 [2024-04-17 16:27:24.079990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.084 [2024-04-17 16:27:24.080052] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x55e3b0 was disconnected and freed. reset controller. 00:16:05.084 [2024-04-17 16:27:24.080070] bdev_nvme.c:1853:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:16:05.084 [2024-04-17 16:27:24.080126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:05.084 [2024-04-17 16:27:24.080147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.084 [2024-04-17 16:27:24.080162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:05.084 [2024-04-17 16:27:24.080175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.084 [2024-04-17 16:27:24.080189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:05.085 [2024-04-17 16:27:24.080203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.085 [2024-04-17 16:27:24.080217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:05.085 [2024-04-17 16:27:24.080230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.085 [2024-04-17 16:27:24.080244] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:05.085 [2024-04-17 16:27:24.080297] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4f6740 (9): Bad file descriptor 00:16:05.085 [2024-04-17 16:27:24.084130] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:05.085 [2024-04-17 16:27:24.120182] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
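[editor's sketch] The sequence above is the standard SPDK bdev_nvme failover path: two TCP trids to the same subsystem (nqn.2016-06.io.spdk:cnode1) are registered under one controller name, so when the 10.0.0.2:4420 qpair is torn down, outstanding I/O is aborted with SQ DELETION and the controller resets onto 10.0.0.2:4421. A minimal sketch of such a two-path attach via SPDK's JSON-RPC client (the bdev name Nvme0 is illustrative; addresses and subnqn are the ones seen in this log, and the exact script the test uses is not visible in this excerpt):

    # Attach the first path; exposes the namespace as bdev Nvme0n1.
    ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    # Attach an alternate path under the same controller name; bdev_nvme
    # can then fail over to it when the first qpair disconnects.
    ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1

The "(00/08)" pair in every completion is the NVMe status printed as (sct/sc): status code type 0x0 (generic command status) with status code 0x08, "Command Aborted due to SQ Deletion", which is the expected status for commands still outstanding on a submission queue the failover path deletes.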
00:16:05.085 [... four admin-qpair ASYNC EVENT REQUEST (0c) commands elided (16:27:27.752318 - 16:27:27.752524, qid:0 cid:3-0), each completed ABORTED - SQ DELETION (00/08) ...]
00:16:05.085 [2024-04-17 16:27:27.752537] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4f6740 is same with the state(5) to be set
[... repeated nvme_qpair.c NOTICE pairs elided (16:27:27.756375 - 16:27:27.759386): one in-flight READ (cid:63, lba:39880, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and every in-flight WRITE on sqid:1 (lba:40144 through lba:40816, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) was printed by nvme_io_qpair_print_command and completed ABORTED - SQ DELETION (00/08) qid:1 ...]
00:16:05.087 [2024-04-17 16:27:27.759400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:40824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.087 [2024-04-17 16:27:27.759414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.087 [2024-04-17 16:27:27.759428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:40832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.087 [2024-04-17 16:27:27.759441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.087 [2024-04-17 16:27:27.759456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:40840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.087 [2024-04-17 16:27:27.759469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.087 [2024-04-17 16:27:27.759500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:40848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.087 [2024-04-17 16:27:27.759513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.087 [2024-04-17 16:27:27.759528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:40856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.087 [2024-04-17 16:27:27.759543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.087 [2024-04-17 16:27:27.759575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.087 [2024-04-17 16:27:27.759589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.087 [2024-04-17 16:27:27.759604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.087 [2024-04-17 16:27:27.759618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.087 [2024-04-17 16:27:27.759634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:40880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.087 [2024-04-17 16:27:27.759648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.087 [2024-04-17 16:27:27.759663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:40888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.087 [2024-04-17 16:27:27.759678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.087 [2024-04-17 16:27:27.759694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:40896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.087 [2024-04-17 16:27:27.759708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.087 [2024-04-17 16:27:27.759724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:39888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.087 [2024-04-17 16:27:27.759738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:16:05.087 [2024-04-17 16:27:27.759754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:39896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.087 [2024-04-17 16:27:27.759778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.087 [2024-04-17 16:27:27.759795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:39904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.087 [2024-04-17 16:27:27.759809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.087 [2024-04-17 16:27:27.759834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:39912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.087 [2024-04-17 16:27:27.759848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.087 [2024-04-17 16:27:27.759873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:39920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.087 [2024-04-17 16:27:27.759889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.087 [2024-04-17 16:27:27.759905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:39928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.088 [2024-04-17 16:27:27.759924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.088 [2024-04-17 16:27:27.759940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:39936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.088 [2024-04-17 16:27:27.759954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.088 [2024-04-17 16:27:27.759970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:39944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.088 [2024-04-17 16:27:27.759984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.088 [2024-04-17 16:27:27.759999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:39952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.088 [2024-04-17 16:27:27.760014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.088 [2024-04-17 16:27:27.760030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:39960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.088 [2024-04-17 16:27:27.760044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.088 [2024-04-17 16:27:27.760060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:39968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.088 [2024-04-17 16:27:27.760073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.088 [2024-04-17 
16:27:27.760089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:39976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.088 [2024-04-17 16:27:27.760104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.088 [2024-04-17 16:27:27.760119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:39984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.088 [2024-04-17 16:27:27.760133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.088 [2024-04-17 16:27:27.760149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:39992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.088 [2024-04-17 16:27:27.760163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.088 [2024-04-17 16:27:27.760238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:40000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.088 [2024-04-17 16:27:27.760255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.088 [2024-04-17 16:27:27.760271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:40008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.088 [2024-04-17 16:27:27.760285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.088 [2024-04-17 16:27:27.760301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:40016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.088 [2024-04-17 16:27:27.760315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.088 [2024-04-17 16:27:27.760331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:40024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.088 [2024-04-17 16:27:27.760344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.088 [2024-04-17 16:27:27.760360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:40032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.088 [2024-04-17 16:27:27.760374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.088 [2024-04-17 16:27:27.760390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:40040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.088 [2024-04-17 16:27:27.760404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.088 [2024-04-17 16:27:27.760445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:40048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.088 [2024-04-17 16:27:27.760459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.088 [2024-04-17 16:27:27.760474] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:40056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.088 [2024-04-17 16:27:27.760487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.088 [2024-04-17 16:27:27.760503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:40064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.088 [2024-04-17 16:27:27.760516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.088 [2024-04-17 16:27:27.760531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:40072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.088 [2024-04-17 16:27:27.760545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.088 [2024-04-17 16:27:27.760560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:40080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.088 [2024-04-17 16:27:27.760573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.088 [2024-04-17 16:27:27.760588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:40088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.088 [2024-04-17 16:27:27.760602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.088 [2024-04-17 16:27:27.760617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:40096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.088 [2024-04-17 16:27:27.760638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.088 [2024-04-17 16:27:27.760654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:40104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.088 [2024-04-17 16:27:27.760667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.088 [2024-04-17 16:27:27.760682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:40112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.088 [2024-04-17 16:27:27.760696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.088 [2024-04-17 16:27:27.760711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:40120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.088 [2024-04-17 16:27:27.760725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.088 [2024-04-17 16:27:27.760740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:40128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.088 [2024-04-17 16:27:27.760754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.088 [2024-04-17 16:27:27.760767] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x4f6df0 is same with the state(5) to be set 00:16:05.088 [2024-04-17 16:27:27.760810] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:05.088 [2024-04-17 16:27:27.760823] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:05.088 [2024-04-17 16:27:27.760834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40136 len:8 PRP1 0x0 PRP2 0x0 00:16:05.088 [2024-04-17 16:27:27.760847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.088 [2024-04-17 16:27:27.760915] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x4f6df0 was disconnected and freed. reset controller. 00:16:05.088 [2024-04-17 16:27:27.760933] bdev_nvme.c:1853:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:16:05.088 [2024-04-17 16:27:27.760947] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:05.088 [2024-04-17 16:27:27.764989] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:05.088 [2024-04-17 16:27:27.765025] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4f6740 (9): Bad file descriptor 00:16:05.088 [2024-04-17 16:27:27.792954] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:05.088 [2024-04-17 16:27:32.286903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:05.088 [2024-04-17 16:27:32.286949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.088 [2024-04-17 16:27:32.286967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:05.088 [2024-04-17 16:27:32.286982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.088 [2024-04-17 16:27:32.286996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:05.088 [2024-04-17 16:27:32.287010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.088 [2024-04-17 16:27:32.287024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:05.088 [2024-04-17 16:27:32.287072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.088 [2024-04-17 16:27:32.287096] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4f6740 is same with the state(5) to be set 00:16:05.088 [2024-04-17 16:27:32.287156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:96112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.088 [2024-04-17 16:27:32.287179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.088 [2024-04-17 16:27:32.287213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:34 nsid:1 lba:96120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.088 [2024-04-17 16:27:32.287235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.088 [2024-04-17 16:27:32.287255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:96128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.088 [2024-04-17 16:27:32.287284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.088 [2024-04-17 16:27:32.287315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:96136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.088 [2024-04-17 16:27:32.287328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.088 [2024-04-17 16:27:32.287343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:96144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.088 [2024-04-17 16:27:32.287356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.088 [2024-04-17 16:27:32.287371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:96152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.089 [2024-04-17 16:27:32.287383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.089 [2024-04-17 16:27:32.287398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:96160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.089 [2024-04-17 16:27:32.287411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.089 [2024-04-17 16:27:32.287426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.089 [2024-04-17 16:27:32.287439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.089 [2024-04-17 16:27:32.287453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:96176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.089 [2024-04-17 16:27:32.287466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.089 [2024-04-17 16:27:32.287480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:96184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.089 [2024-04-17 16:27:32.287494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.089 [2024-04-17 16:27:32.287508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:96192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.089 [2024-04-17 16:27:32.287521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.089 [2024-04-17 16:27:32.287536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:96200 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:16:05.089 [2024-04-17 16:27:32.287559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.089 [2024-04-17 16:27:32.287577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:96208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.089 [2024-04-17 16:27:32.287590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.089 [2024-04-17 16:27:32.287605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:96216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.089 [2024-04-17 16:27:32.287619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.089 [2024-04-17 16:27:32.287633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:96224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.089 [2024-04-17 16:27:32.287646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.089 [2024-04-17 16:27:32.287661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:96232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.089 [2024-04-17 16:27:32.287674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.089 [2024-04-17 16:27:32.287689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:96240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.089 [2024-04-17 16:27:32.287702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.089 [2024-04-17 16:27:32.287716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:96248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.089 [2024-04-17 16:27:32.287746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.089 [2024-04-17 16:27:32.287763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:96256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.089 [2024-04-17 16:27:32.287776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.089 [2024-04-17 16:27:32.287808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:96264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.089 [2024-04-17 16:27:32.287822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.089 [2024-04-17 16:27:32.287853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:96272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.089 [2024-04-17 16:27:32.287874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.089 [2024-04-17 16:27:32.287889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:96280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.089 
[2024-04-17 16:27:32.287903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.089 [2024-04-17 16:27:32.287927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:96288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.089 [2024-04-17 16:27:32.287941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.089 [2024-04-17 16:27:32.287957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:96296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.089 [2024-04-17 16:27:32.287971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.089 [2024-04-17 16:27:32.287986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:96304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.089 [2024-04-17 16:27:32.288009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.089 [2024-04-17 16:27:32.288026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:96312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.089 [2024-04-17 16:27:32.288040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.089 [2024-04-17 16:27:32.288056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:96320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.089 [2024-04-17 16:27:32.288070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.089 [2024-04-17 16:27:32.288091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:96328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.089 [2024-04-17 16:27:32.288106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.089 [2024-04-17 16:27:32.288122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:96336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.089 [2024-04-17 16:27:32.288136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.089 [2024-04-17 16:27:32.288151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.089 [2024-04-17 16:27:32.288165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.089 [2024-04-17 16:27:32.288181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:96352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.089 [2024-04-17 16:27:32.288195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.089 [2024-04-17 16:27:32.288211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:96360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.089 [2024-04-17 16:27:32.288224] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.089 [2024-04-17 16:27:32.288241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:96368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.089 [2024-04-17 16:27:32.288255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.089 [2024-04-17 16:27:32.288302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:96376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.089 [2024-04-17 16:27:32.288316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.089 [2024-04-17 16:27:32.288332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:96384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.089 [2024-04-17 16:27:32.288346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.089 [2024-04-17 16:27:32.288362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:96392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.089 [2024-04-17 16:27:32.288376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.089 [2024-04-17 16:27:32.288392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:96400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.089 [2024-04-17 16:27:32.288405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.089 [2024-04-17 16:27:32.288427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:96408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.089 [2024-04-17 16:27:32.288442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.089 [2024-04-17 16:27:32.288458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:96416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.089 [2024-04-17 16:27:32.288472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.089 [2024-04-17 16:27:32.288487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:96424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.089 [2024-04-17 16:27:32.288502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.089 [2024-04-17 16:27:32.288517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:96432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.089 [2024-04-17 16:27:32.288531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.089 [2024-04-17 16:27:32.288546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:96440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.089 [2024-04-17 16:27:32.288560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.089 [2024-04-17 16:27:32.288576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:96448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.089 [2024-04-17 16:27:32.288590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.089 [2024-04-17 16:27:32.288605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:96456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.089 [2024-04-17 16:27:32.288621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.089 [2024-04-17 16:27:32.288638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:96464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.089 [2024-04-17 16:27:32.288652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.089 [2024-04-17 16:27:32.288667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:96472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.090 [2024-04-17 16:27:32.288681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.090 [2024-04-17 16:27:32.288697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:96480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.090 [2024-04-17 16:27:32.288711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.090 [2024-04-17 16:27:32.288726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:96488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.090 [2024-04-17 16:27:32.288740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.090 [2024-04-17 16:27:32.288755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:96496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.090 [2024-04-17 16:27:32.288770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.090 [2024-04-17 16:27:32.288785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:96504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.090 [2024-04-17 16:27:32.288820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.090 [2024-04-17 16:27:32.288838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:96512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.090 [2024-04-17 16:27:32.288852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.090 [2024-04-17 16:27:32.288868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:96520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.090 [2024-04-17 16:27:32.288882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.090 [2024-04-17 16:27:32.288897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.090 [2024-04-17 16:27:32.288911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.090 [2024-04-17 16:27:32.288927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:96536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.090 [2024-04-17 16:27:32.288941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.090 [2024-04-17 16:27:32.288957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:96544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.090 [2024-04-17 16:27:32.288971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.090 [2024-04-17 16:27:32.288986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:96552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.090 [2024-04-17 16:27:32.289000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.090 [2024-04-17 16:27:32.289016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:96560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.090 [2024-04-17 16:27:32.289030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.090 [2024-04-17 16:27:32.289045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:96568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.090 [2024-04-17 16:27:32.289059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.090 [2024-04-17 16:27:32.289076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:96576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.090 [2024-04-17 16:27:32.289092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.090 [2024-04-17 16:27:32.289108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:96584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.090 [2024-04-17 16:27:32.289123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.090 [2024-04-17 16:27:32.289141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:96592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.090 [2024-04-17 16:27:32.289155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.090 [2024-04-17 16:27:32.289171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:96600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.090 [2024-04-17 16:27:32.289185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.090 
[2024-04-17 16:27:32.289208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:96608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.090 [2024-04-17 16:27:32.289223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.090 [2024-04-17 16:27:32.289238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:96616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.090 [2024-04-17 16:27:32.289252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.090 [2024-04-17 16:27:32.289267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:96624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.090 [2024-04-17 16:27:32.289281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.090 [2024-04-17 16:27:32.289297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:96632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.090 [2024-04-17 16:27:32.289311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.090 [2024-04-17 16:27:32.289326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:96640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.090 [2024-04-17 16:27:32.289340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.090 [2024-04-17 16:27:32.289355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:96648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.090 [2024-04-17 16:27:32.289369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.090 [2024-04-17 16:27:32.289385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:96656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.090 [2024-04-17 16:27:32.289399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.090 [2024-04-17 16:27:32.289414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:96664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.090 [2024-04-17 16:27:32.289428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.090 [2024-04-17 16:27:32.289444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:96672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.090 [2024-04-17 16:27:32.289458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.090 [2024-04-17 16:27:32.289473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:96680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.090 [2024-04-17 16:27:32.289486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.090 [2024-04-17 16:27:32.289502] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:96760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.090 [2024-04-17 16:27:32.289516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.090 [2024-04-17 16:27:32.289532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:96768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.090 [2024-04-17 16:27:32.289546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.090 [2024-04-17 16:27:32.289561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:96776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.090 [2024-04-17 16:27:32.289582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.090 [2024-04-17 16:27:32.289599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:96784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.090 [2024-04-17 16:27:32.289613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.090 [2024-04-17 16:27:32.289629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:96792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.090 [2024-04-17 16:27:32.289643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.090 [2024-04-17 16:27:32.289658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:96800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.090 [2024-04-17 16:27:32.289673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.091 [2024-04-17 16:27:32.289688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.091 [2024-04-17 16:27:32.289702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.091 [2024-04-17 16:27:32.289717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:96816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.091 [2024-04-17 16:27:32.289739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.091 [2024-04-17 16:27:32.289760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:96824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.091 [2024-04-17 16:27:32.289787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.091 [2024-04-17 16:27:32.289806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:96832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.091 [2024-04-17 16:27:32.289820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.091 [2024-04-17 16:27:32.289835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:83 nsid:1 lba:96840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.091 [2024-04-17 16:27:32.289849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.091 [2024-04-17 16:27:32.289864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:96848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.091 [2024-04-17 16:27:32.289899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.091 [2024-04-17 16:27:32.289917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:96856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.091 [2024-04-17 16:27:32.289931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.091 [2024-04-17 16:27:32.289956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:96864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.091 [2024-04-17 16:27:32.289971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.091 [2024-04-17 16:27:32.289986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:96872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.091 [2024-04-17 16:27:32.290000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.091 [2024-04-17 16:27:32.290016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.091 [2024-04-17 16:27:32.290039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.091 [2024-04-17 16:27:32.290056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:96888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.091 [2024-04-17 16:27:32.290070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.091 [2024-04-17 16:27:32.290086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:96896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.091 [2024-04-17 16:27:32.290100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.091 [2024-04-17 16:27:32.290115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:96904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.091 [2024-04-17 16:27:32.290130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.091 [2024-04-17 16:27:32.290145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:96912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.091 [2024-04-17 16:27:32.290159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.091 [2024-04-17 16:27:32.290174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:96920 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000
00:16:05.091 [2024-04-17 16:27:32.290188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:05.091 [2024-04-17 16:27:32.290204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:96928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:05.091 [2024-04-17 16:27:32.290221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same WRITE + "ABORTED - SQ DELETION" pair repeats for every queued WRITE from lba:96936 through lba:97128 (cid varies per command), followed by matching READ + "ABORTED - SQ DELETION" pairs for lba:96688 through lba:96744; the repetitive entries are elided here ...]
00:16:05.092 [2024-04-17 16:27:32.291287] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56d420 is same with the state(5) to be set
00:16:05.092 [2024-04-17 16:27:32.291303] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:16:05.092 [2024-04-17 16:27:32.291314] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:16:05.092 [2024-04-17 16:27:32.291325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96752 len:8 PRP1 0x0 PRP2 0x0
00:16:05.092 [2024-04-17 16:27:32.291338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:05.092 [2024-04-17 16:27:32.291395] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x56d420 was disconnected and freed. reset controller.
00:16:05.092 [2024-04-17 16:27:32.291412] bdev_nvme.c:1853:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:16:05.092 [2024-04-17 16:27:32.291426] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:16:05.092 [2024-04-17 16:27:32.295236] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:16:05.092 [2024-04-17 16:27:32.295275] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4f6740 (9): Bad file descriptor
00:16:05.092 [2024-04-17 16:27:32.326871] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
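The abort storm and recovery above are the expected product of a path switch: deleting the submission queue on the old path completes every queued command with ABORTED - SQ DELETION, after which bdev_nvme fails over to the next registered path and resets the controller. The whole exercise is driven over JSON-RPC. A minimal sketch follows; the rpc.py path, socket, NQN, address, ports, and flags are taken from this log, while the loop and variable names are purely illustrative and not the literal test script:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
nqn=nqn.2016-06.io.spdk:cnode1

# Publish two additional target ports so the initiator has alternate paths.
$rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4421
$rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4422

# Register all three paths under the same controller name inside bdevperf;
# bdev_nvme treats the extra transport IDs as failover targets for NVMe0.
for port in 4420 4421 4422; do
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s $port -f ipv4 -n $nqn
done

# Detaching the active path is what produces the "Start failover from ... to ..."
# and "Resetting controller successful" notices seen in the log.
$rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn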
00:16:05.092
00:16:05.092 Latency(us)
00:16:05.092 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:05.092 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:05.092 Verification LBA range: start 0x0 length 0x4000
00:16:05.092 NVMe0n1 : 15.01 8262.49 32.28 200.26 0.00 15092.69 670.25 19899.11
00:16:05.092 ===================================================================================================================
00:16:05.092 Total : 8262.49 32.28 200.26 0.00 15092.69 670.25 19899.11
00:16:05.092 Received shutdown signal, test time was about 15.000000 seconds
00:16:05.092
00:16:05.092 Latency(us)
00:16:05.092 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:05.092 ===================================================================================================================
00:16:05.092 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:16:05.092 16:27:38 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:16:05.092 16:27:38 -- host/failover.sh@65 -- # count=3
00:16:05.092 16:27:38 -- host/failover.sh@67 -- # (( count != 3 ))
00:16:05.092 16:27:38 -- host/failover.sh@73 -- # bdevperf_pid=81726
00:16:05.092 16:27:38 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:16:05.092 16:27:38 -- host/failover.sh@75 -- # waitforlisten 81726 /var/tmp/bdevperf.sock
00:16:05.092 16:27:38 -- common/autotest_common.sh@817 -- # '[' -z 81726 ']'
00:16:05.092 16:27:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:16:05.092 16:27:38 -- common/autotest_common.sh@822 -- # local max_retries=100
00:16:05.092 16:27:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:16:05.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
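Worth noting in the relaunch above: bdevperf is restarted with -z, so it comes up idle and waits to be configured over the UNIX-domain RPC socket named by -r, and waitforlisten blocks until that socket accepts connections. A minimal sketch of the same launch pattern, assuming the paths shown in the trace (the readiness poll via rpc_get_methods is an illustrative stand-in for waitforlisten, not the helper itself):

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

# -z: start with no workload and wait for RPC configuration
# -r: RPC listen socket; -q/-o/-w/-t: queue depth, IO size in bytes, workload, runtime
$bdevperf -z -r $sock -q 128 -o 4096 -w verify -t 1 -f &
bdevperf_pid=$!

# Block until the RPC server answers, then drive the test over $sock.
until $rpc -s $sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done

The workload itself is then kicked off with examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests, as the subsequent trace entries show.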
00:16:05.092 16:27:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:05.092 16:27:38 -- common/autotest_common.sh@10 -- # set +x 00:16:05.382 16:27:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:05.382 16:27:39 -- common/autotest_common.sh@850 -- # return 0 00:16:05.382 16:27:39 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:05.640 [2024-04-17 16:27:39.574687] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:05.640 16:27:39 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:16:05.899 [2024-04-17 16:27:39.806989] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:16:05.899 16:27:39 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:06.157 NVMe0n1 00:16:06.157 16:27:40 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:06.415 00:16:06.674 16:27:40 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:06.932 00:16:06.932 16:27:40 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:06.932 16:27:40 -- host/failover.sh@82 -- # grep -q NVMe0 00:16:07.191 16:27:41 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:07.450 16:27:41 -- host/failover.sh@87 -- # sleep 3 00:16:10.734 16:27:44 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:10.734 16:27:44 -- host/failover.sh@88 -- # grep -q NVMe0 00:16:10.734 16:27:44 -- host/failover.sh@90 -- # run_test_pid=81869 00:16:10.734 16:27:44 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:10.734 16:27:44 -- host/failover.sh@92 -- # wait 81869 00:16:11.668 0 00:16:11.668 16:27:45 -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:11.668 [2024-04-17 16:27:38.303490] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 
00:16:11.668 [2024-04-17 16:27:38.303600] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81726 ]
00:16:11.668 [2024-04-17 16:27:38.439124] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:11.668 [2024-04-17 16:27:38.554842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:16:11.668 [2024-04-17 16:27:41.247898] bdev_nvme.c:1853:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:16:11.668 [2024-04-17 16:27:41.248019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:16:11.668 [2024-04-17 16:27:41.248044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:11.668 [2024-04-17 16:27:41.248064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:16:11.668 [2024-04-17 16:27:41.248078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:11.668 [2024-04-17 16:27:41.248093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:16:11.668 [2024-04-17 16:27:41.248106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:11.668 [2024-04-17 16:27:41.248120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:16:11.668 [2024-04-17 16:27:41.248134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:11.668 [2024-04-17 16:27:41.248149] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:16:11.668 [2024-04-17 16:27:41.248200] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:16:11.668 [2024-04-17 16:27:41.248229] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf8740 (9): Bad file descriptor
00:16:11.668 [2024-04-17 16:27:41.251527] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:16:11.668 Running I/O for 1 seconds...
00:16:11.668 00:16:11.668 Latency(us) 00:16:11.668 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:11.668 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:11.668 Verification LBA range: start 0x0 length 0x4000 00:16:11.668 NVMe0n1 : 1.01 8629.57 33.71 0.00 0.00 14732.12 1645.85 15609.48 00:16:11.668 =================================================================================================================== 00:16:11.668 Total : 8629.57 33.71 0.00 0.00 14732.12 1645.85 15609.48 00:16:11.668 16:27:45 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:11.668 16:27:45 -- host/failover.sh@95 -- # grep -q NVMe0 00:16:11.926 16:27:45 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:12.493 16:27:46 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:12.493 16:27:46 -- host/failover.sh@99 -- # grep -q NVMe0 00:16:12.493 16:27:46 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:12.753 16:27:46 -- host/failover.sh@101 -- # sleep 3 00:16:16.150 16:27:49 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:16.150 16:27:49 -- host/failover.sh@103 -- # grep -q NVMe0 00:16:16.150 16:27:50 -- host/failover.sh@108 -- # killprocess 81726 00:16:16.150 16:27:50 -- common/autotest_common.sh@936 -- # '[' -z 81726 ']' 00:16:16.150 16:27:50 -- common/autotest_common.sh@940 -- # kill -0 81726 00:16:16.150 16:27:50 -- common/autotest_common.sh@941 -- # uname 00:16:16.150 16:27:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:16.150 16:27:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81726 00:16:16.150 killing process with pid 81726 00:16:16.150 16:27:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:16.150 16:27:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:16.150 16:27:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81726' 00:16:16.150 16:27:50 -- common/autotest_common.sh@955 -- # kill 81726 00:16:16.150 16:27:50 -- common/autotest_common.sh@960 -- # wait 81726 00:16:16.409 16:27:50 -- host/failover.sh@110 -- # sync 00:16:16.409 16:27:50 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:16.667 16:27:50 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:16:16.667 16:27:50 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:16.667 16:27:50 -- host/failover.sh@116 -- # nvmftestfini 00:16:16.667 16:27:50 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:16.667 16:27:50 -- nvmf/common.sh@117 -- # sync 00:16:16.667 16:27:50 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:16.667 16:27:50 -- nvmf/common.sh@120 -- # set +e 00:16:16.667 16:27:50 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:16.667 16:27:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:16.667 rmmod nvme_tcp 00:16:16.667 rmmod nvme_fabrics 00:16:16.667 rmmod nvme_keyring 00:16:16.667 16:27:50 -- nvmf/common.sh@123 
-- # modprobe -v -r nvme-fabrics 00:16:16.667 16:27:50 -- nvmf/common.sh@124 -- # set -e 00:16:16.667 16:27:50 -- nvmf/common.sh@125 -- # return 0 00:16:16.667 16:27:50 -- nvmf/common.sh@478 -- # '[' -n 81360 ']' 00:16:16.667 16:27:50 -- nvmf/common.sh@479 -- # killprocess 81360 00:16:16.667 16:27:50 -- common/autotest_common.sh@936 -- # '[' -z 81360 ']' 00:16:16.667 16:27:50 -- common/autotest_common.sh@940 -- # kill -0 81360 00:16:16.667 16:27:50 -- common/autotest_common.sh@941 -- # uname 00:16:16.667 16:27:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:16.667 16:27:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81360 00:16:16.667 killing process with pid 81360 00:16:16.667 16:27:50 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:16.667 16:27:50 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:16.667 16:27:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81360' 00:16:16.667 16:27:50 -- common/autotest_common.sh@955 -- # kill 81360 00:16:16.667 16:27:50 -- common/autotest_common.sh@960 -- # wait 81360 00:16:16.926 16:27:50 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:16.926 16:27:50 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:16.926 16:27:50 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:16.926 16:27:50 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:16.926 16:27:50 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:16.926 16:27:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:16.926 16:27:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:16.926 16:27:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:17.189 16:27:50 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:17.189 00:16:17.189 real 0m33.468s 00:16:17.189 user 2m9.943s 00:16:17.189 sys 0m4.957s 00:16:17.189 ************************************ 00:16:17.189 END TEST nvmf_failover 00:16:17.189 ************************************ 00:16:17.189 16:27:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:17.189 16:27:50 -- common/autotest_common.sh@10 -- # set +x 00:16:17.189 16:27:51 -- nvmf/nvmf.sh@99 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:16:17.189 16:27:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:17.189 16:27:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:17.189 16:27:51 -- common/autotest_common.sh@10 -- # set +x 00:16:17.189 ************************************ 00:16:17.189 START TEST nvmf_discovery 00:16:17.189 ************************************ 00:16:17.189 16:27:51 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:16:17.189 * Looking for test storage... 
00:16:17.189 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:17.189 16:27:51 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:17.189 16:27:51 -- nvmf/common.sh@7 -- # uname -s 00:16:17.189 16:27:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:17.189 16:27:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:17.189 16:27:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:17.189 16:27:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:17.189 16:27:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:17.189 16:27:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:17.189 16:27:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:17.189 16:27:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:17.189 16:27:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:17.189 16:27:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:17.189 16:27:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d 00:16:17.189 16:27:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=35bbb10f-fc38-42ac-b909-033700c5e05d 00:16:17.189 16:27:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:17.189 16:27:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:17.189 16:27:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:17.189 16:27:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:17.189 16:27:51 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:17.189 16:27:51 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:17.189 16:27:51 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:17.189 16:27:51 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:17.189 16:27:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.189 16:27:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.189 16:27:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.189 16:27:51 -- paths/export.sh@5 -- # export PATH 00:16:17.189 16:27:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.189 16:27:51 -- nvmf/common.sh@47 -- # : 0 00:16:17.189 16:27:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:17.189 16:27:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:17.189 16:27:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:17.189 16:27:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:17.189 16:27:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:17.189 16:27:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:17.189 16:27:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:17.189 16:27:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:17.189 16:27:51 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:16:17.189 16:27:51 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:16:17.189 16:27:51 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:16:17.189 16:27:51 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:16:17.189 16:27:51 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:16:17.189 16:27:51 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:16:17.189 16:27:51 -- host/discovery.sh@25 -- # nvmftestinit 00:16:17.189 16:27:51 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:17.189 16:27:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:17.189 16:27:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:17.189 16:27:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:17.189 16:27:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:17.189 16:27:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:17.189 16:27:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:17.189 16:27:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:17.189 16:27:51 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:16:17.453 16:27:51 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:16:17.453 16:27:51 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:16:17.453 16:27:51 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:16:17.453 16:27:51 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:16:17.453 16:27:51 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:16:17.453 16:27:51 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:17.453 16:27:51 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:17.453 16:27:51 -- 
nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:17.453 16:27:51 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:17.453 16:27:51 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:17.453 16:27:51 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:17.453 16:27:51 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:17.453 16:27:51 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:17.453 16:27:51 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:17.453 16:27:51 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:17.453 16:27:51 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:17.453 16:27:51 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:17.453 16:27:51 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:17.453 16:27:51 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:17.453 Cannot find device "nvmf_tgt_br" 00:16:17.453 16:27:51 -- nvmf/common.sh@155 -- # true 00:16:17.453 16:27:51 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:17.453 Cannot find device "nvmf_tgt_br2" 00:16:17.453 16:27:51 -- nvmf/common.sh@156 -- # true 00:16:17.453 16:27:51 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:17.453 16:27:51 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:17.453 Cannot find device "nvmf_tgt_br" 00:16:17.453 16:27:51 -- nvmf/common.sh@158 -- # true 00:16:17.453 16:27:51 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:17.453 Cannot find device "nvmf_tgt_br2" 00:16:17.453 16:27:51 -- nvmf/common.sh@159 -- # true 00:16:17.453 16:27:51 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:17.453 16:27:51 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:17.453 16:27:51 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:17.453 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:17.453 16:27:51 -- nvmf/common.sh@162 -- # true 00:16:17.453 16:27:51 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:17.453 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:17.453 16:27:51 -- nvmf/common.sh@163 -- # true 00:16:17.453 16:27:51 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:17.453 16:27:51 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:17.453 16:27:51 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:17.453 16:27:51 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:17.453 16:27:51 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:17.453 16:27:51 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:17.453 16:27:51 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:17.453 16:27:51 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:17.453 16:27:51 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:17.453 16:27:51 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:17.453 16:27:51 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:17.453 16:27:51 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:17.453 16:27:51 -- 
nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:17.453 16:27:51 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:17.453 16:27:51 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:17.453 16:27:51 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:17.453 16:27:51 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:17.453 16:27:51 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:17.453 16:27:51 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:17.453 16:27:51 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:17.712 16:27:51 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:17.712 16:27:51 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:17.712 16:27:51 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:17.712 16:27:51 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:17.712 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:17.712 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:16:17.712 00:16:17.712 --- 10.0.0.2 ping statistics --- 00:16:17.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.712 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:16:17.712 16:27:51 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:17.712 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:17.712 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:16:17.712 00:16:17.712 --- 10.0.0.3 ping statistics --- 00:16:17.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.712 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:16:17.712 16:27:51 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:17.712 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:17.712 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:16:17.712 00:16:17.712 --- 10.0.0.1 ping statistics --- 00:16:17.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.712 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:16:17.712 16:27:51 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:17.712 16:27:51 -- nvmf/common.sh@422 -- # return 0 00:16:17.712 16:27:51 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:17.712 16:27:51 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:17.712 16:27:51 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:17.712 16:27:51 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:17.712 16:27:51 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:17.712 16:27:51 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:17.712 16:27:51 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:17.712 16:27:51 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:16:17.712 16:27:51 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:17.712 16:27:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:17.712 16:27:51 -- common/autotest_common.sh@10 -- # set +x 00:16:17.712 16:27:51 -- nvmf/common.sh@470 -- # nvmfpid=82178 00:16:17.712 16:27:51 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:17.712 16:27:51 -- nvmf/common.sh@471 -- # waitforlisten 82178 00:16:17.712 16:27:51 -- common/autotest_common.sh@817 -- # '[' -z 82178 ']' 00:16:17.712 16:27:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.712 16:27:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:17.712 16:27:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:17.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:17.712 16:27:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:17.712 16:27:51 -- common/autotest_common.sh@10 -- # set +x 00:16:17.712 [2024-04-17 16:27:51.629538] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:16:17.712 [2024-04-17 16:27:51.629643] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:17.971 [2024-04-17 16:27:51.771507] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.972 [2024-04-17 16:27:51.877780] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:17.972 [2024-04-17 16:27:51.877837] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:17.972 [2024-04-17 16:27:51.877848] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:17.972 [2024-04-17 16:27:51.877856] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:17.972 [2024-04-17 16:27:51.877862] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
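The ip/iptables plumbing traced above builds a small virtual topology: the SPDK target runs inside the nvmf_tgt_ns_spdk namespace and owns 10.0.0.2 and 10.0.0.3 on two veth endpoints, the initiator keeps 10.0.0.1 on the host side, and a bridge joins the peer ends, which is exactly what the three ping checks confirm. Condensed into a runnable sketch, with every command taken from the trace (teardown and the expected "Cannot find device" cleanup probes omitted):

ip netns add nvmf_tgt_ns_spdk

# veth pairs: one initiator-side endpoint, two target-side endpoints
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# move the target ends into the namespace and assign addresses
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# bring everything up on both sides of the namespace boundary
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# enslave the host-side peer ends to one bridge and open the firewall
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2   # sanity check: initiator reaches the namespaced target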
00:16:17.972 [2024-04-17 16:27:51.877885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:18.908 16:27:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:18.908 16:27:52 -- common/autotest_common.sh@850 -- # return 0 00:16:18.908 16:27:52 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:18.908 16:27:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:18.908 16:27:52 -- common/autotest_common.sh@10 -- # set +x 00:16:18.908 16:27:52 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:18.908 16:27:52 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:18.908 16:27:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:18.908 16:27:52 -- common/autotest_common.sh@10 -- # set +x 00:16:18.908 [2024-04-17 16:27:52.687224] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:18.908 16:27:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:18.908 16:27:52 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:16:18.908 16:27:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:18.908 16:27:52 -- common/autotest_common.sh@10 -- # set +x 00:16:18.908 [2024-04-17 16:27:52.695334] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:18.908 16:27:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:18.908 16:27:52 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:16:18.908 16:27:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:18.908 16:27:52 -- common/autotest_common.sh@10 -- # set +x 00:16:18.908 null0 00:16:18.908 16:27:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:18.908 16:27:52 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:16:18.908 16:27:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:18.908 16:27:52 -- common/autotest_common.sh@10 -- # set +x 00:16:18.908 null1 00:16:18.908 16:27:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:18.908 16:27:52 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:16:18.908 16:27:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:18.908 16:27:52 -- common/autotest_common.sh@10 -- # set +x 00:16:18.908 16:27:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:18.908 16:27:52 -- host/discovery.sh@45 -- # hostpid=82234 00:16:18.908 16:27:52 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:16:18.908 16:27:52 -- host/discovery.sh@46 -- # waitforlisten 82234 /tmp/host.sock 00:16:18.908 16:27:52 -- common/autotest_common.sh@817 -- # '[' -z 82234 ']' 00:16:18.908 16:27:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:16:18.908 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:18.908 16:27:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:18.908 16:27:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:18.908 16:27:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:18.908 16:27:52 -- common/autotest_common.sh@10 -- # set +x 00:16:18.908 [2024-04-17 16:27:52.785301] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 
00:16:18.908 [2024-04-17 16:27:52.785396] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82234 ] 00:16:18.908 [2024-04-17 16:27:52.926403] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.167 [2024-04-17 16:27:53.053512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.103 16:27:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:20.103 16:27:53 -- common/autotest_common.sh@850 -- # return 0 00:16:20.103 16:27:53 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:20.103 16:27:53 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:16:20.103 16:27:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.103 16:27:53 -- common/autotest_common.sh@10 -- # set +x 00:16:20.103 16:27:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.103 16:27:53 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:16:20.103 16:27:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.103 16:27:53 -- common/autotest_common.sh@10 -- # set +x 00:16:20.103 16:27:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.103 16:27:53 -- host/discovery.sh@72 -- # notify_id=0 00:16:20.103 16:27:53 -- host/discovery.sh@83 -- # get_subsystem_names 00:16:20.103 16:27:53 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:20.103 16:27:53 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:20.103 16:27:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.103 16:27:53 -- common/autotest_common.sh@10 -- # set +x 00:16:20.103 16:27:53 -- host/discovery.sh@59 -- # sort 00:16:20.103 16:27:53 -- host/discovery.sh@59 -- # xargs 00:16:20.103 16:27:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.103 16:27:53 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:16:20.103 16:27:53 -- host/discovery.sh@84 -- # get_bdev_list 00:16:20.103 16:27:53 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:20.103 16:27:53 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:20.103 16:27:53 -- host/discovery.sh@55 -- # xargs 00:16:20.103 16:27:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.103 16:27:53 -- host/discovery.sh@55 -- # sort 00:16:20.103 16:27:53 -- common/autotest_common.sh@10 -- # set +x 00:16:20.103 16:27:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.103 16:27:53 -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:16:20.103 16:27:53 -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:16:20.103 16:27:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.103 16:27:53 -- common/autotest_common.sh@10 -- # set +x 00:16:20.103 16:27:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.103 16:27:53 -- host/discovery.sh@87 -- # get_subsystem_names 00:16:20.103 16:27:53 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:20.103 16:27:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.103 16:27:53 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:20.103 16:27:53 -- common/autotest_common.sh@10 -- # set +x 00:16:20.103 16:27:53 -- host/discovery.sh@59 
-- # sort 00:16:20.103 16:27:53 -- host/discovery.sh@59 -- # xargs 00:16:20.103 16:27:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.103 16:27:53 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:16:20.103 16:27:53 -- host/discovery.sh@88 -- # get_bdev_list 00:16:20.103 16:27:54 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:20.103 16:27:54 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:20.103 16:27:54 -- host/discovery.sh@55 -- # xargs 00:16:20.103 16:27:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.103 16:27:54 -- host/discovery.sh@55 -- # sort 00:16:20.103 16:27:54 -- common/autotest_common.sh@10 -- # set +x 00:16:20.103 16:27:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.103 16:27:54 -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:16:20.103 16:27:54 -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:16:20.103 16:27:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.103 16:27:54 -- common/autotest_common.sh@10 -- # set +x 00:16:20.103 16:27:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.103 16:27:54 -- host/discovery.sh@91 -- # get_subsystem_names 00:16:20.103 16:27:54 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:20.103 16:27:54 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:20.103 16:27:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.103 16:27:54 -- common/autotest_common.sh@10 -- # set +x 00:16:20.103 16:27:54 -- host/discovery.sh@59 -- # sort 00:16:20.103 16:27:54 -- host/discovery.sh@59 -- # xargs 00:16:20.103 16:27:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.103 16:27:54 -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:16:20.103 16:27:54 -- host/discovery.sh@92 -- # get_bdev_list 00:16:20.103 16:27:54 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:20.103 16:27:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.103 16:27:54 -- common/autotest_common.sh@10 -- # set +x 00:16:20.103 16:27:54 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:20.103 16:27:54 -- host/discovery.sh@55 -- # sort 00:16:20.103 16:27:54 -- host/discovery.sh@55 -- # xargs 00:16:20.103 16:27:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.377 16:27:54 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:16:20.377 16:27:54 -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:20.377 16:27:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.377 16:27:54 -- common/autotest_common.sh@10 -- # set +x 00:16:20.377 [2024-04-17 16:27:54.183955] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:20.377 16:27:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.377 16:27:54 -- host/discovery.sh@97 -- # get_subsystem_names 00:16:20.377 16:27:54 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:20.377 16:27:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.377 16:27:54 -- common/autotest_common.sh@10 -- # set +x 00:16:20.377 16:27:54 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:20.377 16:27:54 -- host/discovery.sh@59 -- # sort 00:16:20.377 16:27:54 -- host/discovery.sh@59 -- # xargs 00:16:20.377 16:27:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.377 16:27:54 -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:16:20.377 16:27:54 
-- host/discovery.sh@98 -- # get_bdev_list 00:16:20.377 16:27:54 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:20.377 16:27:54 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:20.377 16:27:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.377 16:27:54 -- common/autotest_common.sh@10 -- # set +x 00:16:20.377 16:27:54 -- host/discovery.sh@55 -- # sort 00:16:20.377 16:27:54 -- host/discovery.sh@55 -- # xargs 00:16:20.377 16:27:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.377 16:27:54 -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:16:20.377 16:27:54 -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:16:20.377 16:27:54 -- host/discovery.sh@79 -- # expected_count=0 00:16:20.377 16:27:54 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:20.377 16:27:54 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:20.377 16:27:54 -- common/autotest_common.sh@901 -- # local max=10 00:16:20.377 16:27:54 -- common/autotest_common.sh@902 -- # (( max-- )) 00:16:20.377 16:27:54 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:20.377 16:27:54 -- common/autotest_common.sh@903 -- # get_notification_count 00:16:20.377 16:27:54 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:20.377 16:27:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.377 16:27:54 -- host/discovery.sh@74 -- # jq '. | length' 00:16:20.377 16:27:54 -- common/autotest_common.sh@10 -- # set +x 00:16:20.377 16:27:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.377 16:27:54 -- host/discovery.sh@74 -- # notification_count=0 00:16:20.377 16:27:54 -- host/discovery.sh@75 -- # notify_id=0 00:16:20.377 16:27:54 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:16:20.377 16:27:54 -- common/autotest_common.sh@904 -- # return 0 00:16:20.377 16:27:54 -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:16:20.377 16:27:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.377 16:27:54 -- common/autotest_common.sh@10 -- # set +x 00:16:20.377 16:27:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.377 16:27:54 -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:20.377 16:27:54 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:20.377 16:27:54 -- common/autotest_common.sh@901 -- # local max=10 00:16:20.377 16:27:54 -- common/autotest_common.sh@902 -- # (( max-- )) 00:16:20.377 16:27:54 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:20.377 16:27:54 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:16:20.377 16:27:54 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:20.377 16:27:54 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:20.377 16:27:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.377 16:27:54 -- common/autotest_common.sh@10 -- # set +x 00:16:20.377 16:27:54 -- host/discovery.sh@59 -- # sort 00:16:20.377 16:27:54 -- host/discovery.sh@59 -- # xargs 00:16:20.377 16:27:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.642 16:27:54 -- common/autotest_common.sh@903 -- 
# [[ '' == \n\v\m\e\0 ]] 00:16:20.642 16:27:54 -- common/autotest_common.sh@906 -- # sleep 1 00:16:20.901 [2024-04-17 16:27:54.820302] bdev_nvme.c:6898:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:20.901 [2024-04-17 16:27:54.820347] bdev_nvme.c:6978:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:20.901 [2024-04-17 16:27:54.820383] bdev_nvme.c:6861:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:20.901 [2024-04-17 16:27:54.906708] bdev_nvme.c:6827:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:16:21.160 [2024-04-17 16:27:54.963062] bdev_nvme.c:6717:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:21.160 [2024-04-17 16:27:54.963129] bdev_nvme.c:6676:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:21.727 16:27:55 -- common/autotest_common.sh@902 -- # (( max-- )) 00:16:21.727 16:27:55 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:21.727 16:27:55 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:16:21.727 16:27:55 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:21.727 16:27:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:21.727 16:27:55 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:21.727 16:27:55 -- common/autotest_common.sh@10 -- # set +x 00:16:21.727 16:27:55 -- host/discovery.sh@59 -- # sort 00:16:21.727 16:27:55 -- host/discovery.sh@59 -- # xargs 00:16:21.727 16:27:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:21.727 16:27:55 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.727 16:27:55 -- common/autotest_common.sh@904 -- # return 0 00:16:21.727 16:27:55 -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:16:21.727 16:27:55 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:16:21.727 16:27:55 -- common/autotest_common.sh@901 -- # local max=10 00:16:21.727 16:27:55 -- common/autotest_common.sh@902 -- # (( max-- )) 00:16:21.727 16:27:55 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:16:21.727 16:27:55 -- common/autotest_common.sh@903 -- # get_bdev_list 00:16:21.727 16:27:55 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:21.727 16:27:55 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:21.727 16:27:55 -- host/discovery.sh@55 -- # xargs 00:16:21.727 16:27:55 -- host/discovery.sh@55 -- # sort 00:16:21.727 16:27:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:21.727 16:27:55 -- common/autotest_common.sh@10 -- # set +x 00:16:21.727 16:27:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:21.727 16:27:55 -- common/autotest_common.sh@903 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:16:21.727 16:27:55 -- common/autotest_common.sh@904 -- # return 0 00:16:21.727 16:27:55 -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:16:21.727 16:27:55 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:16:21.727 16:27:55 -- common/autotest_common.sh@901 -- # local max=10 00:16:21.727 16:27:55 -- common/autotest_common.sh@902 -- # (( max-- )) 00:16:21.727 16:27:55 
-- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:16:21.727 16:27:55 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:16:21.727 16:27:55 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:21.727 16:27:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:21.727 16:27:55 -- common/autotest_common.sh@10 -- # set +x 00:16:21.727 16:27:55 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:21.727 16:27:55 -- host/discovery.sh@63 -- # sort -n 00:16:21.727 16:27:55 -- host/discovery.sh@63 -- # xargs 00:16:21.727 16:27:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:21.727 16:27:55 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0 ]] 00:16:21.727 16:27:55 -- common/autotest_common.sh@904 -- # return 0 00:16:21.727 16:27:55 -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:16:21.727 16:27:55 -- host/discovery.sh@79 -- # expected_count=1 00:16:21.727 16:27:55 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:21.727 16:27:55 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:21.727 16:27:55 -- common/autotest_common.sh@901 -- # local max=10 00:16:21.727 16:27:55 -- common/autotest_common.sh@902 -- # (( max-- )) 00:16:21.727 16:27:55 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:21.727 16:27:55 -- common/autotest_common.sh@903 -- # get_notification_count 00:16:21.727 16:27:55 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:21.727 16:27:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:21.727 16:27:55 -- common/autotest_common.sh@10 -- # set +x 00:16:21.727 16:27:55 -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:21.727 16:27:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:21.727 16:27:55 -- host/discovery.sh@74 -- # notification_count=1 00:16:21.727 16:27:55 -- host/discovery.sh@75 -- # notify_id=1 00:16:21.727 16:27:55 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:16:21.727 16:27:55 -- common/autotest_common.sh@904 -- # return 0 00:16:21.727 16:27:55 -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:16:21.727 16:27:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:21.727 16:27:55 -- common/autotest_common.sh@10 -- # set +x 00:16:21.727 16:27:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:21.727 16:27:55 -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:21.727 16:27:55 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:21.727 16:27:55 -- common/autotest_common.sh@901 -- # local max=10 00:16:21.727 16:27:55 -- common/autotest_common.sh@902 -- # (( max-- )) 00:16:21.727 16:27:55 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:21.727 16:27:55 -- common/autotest_common.sh@903 -- # get_bdev_list 00:16:21.727 16:27:55 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:21.727 16:27:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:21.727 16:27:55 -- common/autotest_common.sh@10 -- # set +x 00:16:21.727 16:27:55 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:21.727 16:27:55 -- host/discovery.sh@55 -- # xargs 00:16:21.727 16:27:55 -- host/discovery.sh@55 -- # sort 00:16:21.987 16:27:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:21.987 16:27:55 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:21.987 16:27:55 -- common/autotest_common.sh@904 -- # return 0 00:16:21.987 16:27:55 -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:16:21.987 16:27:55 -- host/discovery.sh@79 -- # expected_count=1 00:16:21.987 16:27:55 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:21.987 16:27:55 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:21.987 16:27:55 -- common/autotest_common.sh@901 -- # local max=10 00:16:21.987 16:27:55 -- common/autotest_common.sh@902 -- # (( max-- )) 00:16:21.987 16:27:55 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:21.987 16:27:55 -- common/autotest_common.sh@903 -- # get_notification_count 00:16:21.987 16:27:55 -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:21.987 16:27:55 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:16:21.987 16:27:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:21.987 16:27:55 -- common/autotest_common.sh@10 -- # set +x 00:16:21.987 16:27:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:21.987 16:27:55 -- host/discovery.sh@74 -- # notification_count=1 00:16:21.987 16:27:55 -- host/discovery.sh@75 -- # notify_id=2 00:16:21.987 16:27:55 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:16:21.987 16:27:55 -- common/autotest_common.sh@904 -- # return 0 00:16:21.987 16:27:55 -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:16:21.987 16:27:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:21.987 16:27:55 -- common/autotest_common.sh@10 -- # set +x 00:16:21.987 [2024-04-17 16:27:55.860776] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:21.987 [2024-04-17 16:27:55.861683] bdev_nvme.c:6880:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:16:21.987 [2024-04-17 16:27:55.861726] bdev_nvme.c:6861:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:21.987 16:27:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:21.987 16:27:55 -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:21.987 16:27:55 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:21.987 16:27:55 -- common/autotest_common.sh@901 -- # local max=10 00:16:21.987 16:27:55 -- common/autotest_common.sh@902 -- # (( max-- )) 00:16:21.987 16:27:55 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:21.987 16:27:55 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:16:21.987 16:27:55 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:21.987 16:27:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:21.987 16:27:55 -- common/autotest_common.sh@10 -- # set +x 00:16:21.987 16:27:55 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:21.987 16:27:55 -- host/discovery.sh@59 -- # sort 00:16:21.987 16:27:55 -- host/discovery.sh@59 -- # xargs 00:16:21.987 16:27:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:21.987 16:27:55 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.987 16:27:55 -- common/autotest_common.sh@904 -- # return 0 00:16:21.987 16:27:55 -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:21.987 16:27:55 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:21.987 16:27:55 -- common/autotest_common.sh@901 -- # local max=10 00:16:21.987 16:27:55 -- common/autotest_common.sh@902 -- # (( max-- )) 00:16:21.987 16:27:55 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:21.987 16:27:55 -- common/autotest_common.sh@903 -- # get_bdev_list 00:16:21.987 16:27:55 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:21.987 16:27:55 -- host/discovery.sh@55 -- # sort 00:16:21.987 16:27:55 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:21.987 16:27:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:21.987 16:27:55 -- common/autotest_common.sh@10 -- # set +x 
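Two helpers expanded piecewise in the xtrace above do most of the work in this test. waitforcondition (the autotest_common.sh@900-@906 lines) polls an arbitrary bash condition about once per second, and get_notification_count (the host/discovery.sh@74-@75 lines) counts the notifications issued since the last check and advances a cursor. Minimal sketches reassembled from the traced lines, not quoted from the scripts, so the exact bodies may differ:

    # poll a condition string about once per second, up to ten times
    waitforcondition() {
        local cond=$1                  # e.g. '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
        local max=10
        while ((max--)); do
            eval "$cond" && return 0   # condition met, the test proceeds
            sleep 1                    # the @906 'sleep 1' seen whenever a check misses
        done
        return 1                       # gives up after ~10s; the caller fails the test
    }

    # count notifications newer than $notify_id and move the cursor forward;
    # consistent with the notify_id sequence (0 -> 1 -> 2 -> 4) printed in this log
    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" \
            | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

Because the condition string is re-eval'd on every pass, any command substitution inside it runs again each second, which is why the same rpc_cmd and jq pipelines repeat throughout the trace.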
00:16:21.987 16:27:55 -- host/discovery.sh@55 -- # xargs 00:16:21.987 [2024-04-17 16:27:55.947084] bdev_nvme.c:6822:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:16:21.987 16:27:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:21.987 16:27:55 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:21.987 16:27:55 -- common/autotest_common.sh@904 -- # return 0 00:16:21.987 16:27:55 -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:16:21.987 16:27:55 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:16:21.987 16:27:55 -- common/autotest_common.sh@901 -- # local max=10 00:16:21.987 16:27:55 -- common/autotest_common.sh@902 -- # (( max-- )) 00:16:21.987 16:27:55 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:16:21.987 16:27:55 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:16:21.987 16:27:55 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:21.987 16:27:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:21.987 16:27:55 -- common/autotest_common.sh@10 -- # set +x 00:16:21.987 16:27:55 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:21.987 16:27:55 -- host/discovery.sh@63 -- # sort -n 00:16:21.987 16:27:55 -- host/discovery.sh@63 -- # xargs 00:16:21.987 [2024-04-17 16:27:56.004385] bdev_nvme.c:6717:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:21.987 [2024-04-17 16:27:56.004415] bdev_nvme.c:6676:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:21.987 [2024-04-17 16:27:56.004423] bdev_nvme.c:6676:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:21.987 16:27:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:22.246 16:27:56 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:16:22.246 16:27:56 -- common/autotest_common.sh@906 -- # sleep 1 00:16:23.183 16:27:57 -- common/autotest_common.sh@902 -- # (( max-- )) 00:16:23.183 16:27:57 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:16:23.183 16:27:57 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:16:23.183 16:27:57 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:23.183 16:27:57 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:23.183 16:27:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:23.183 16:27:57 -- common/autotest_common.sh@10 -- # set +x 00:16:23.183 16:27:57 -- host/discovery.sh@63 -- # sort -n 00:16:23.183 16:27:57 -- host/discovery.sh@63 -- # xargs 00:16:23.183 16:27:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:23.183 16:27:57 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:16:23.183 16:27:57 -- common/autotest_common.sh@904 -- # return 0 00:16:23.183 16:27:57 -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:16:23.183 16:27:57 -- host/discovery.sh@79 -- # expected_count=0 00:16:23.183 16:27:57 -- host/discovery.sh@80 -- # 
waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:23.183 16:27:57 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:23.183 16:27:57 -- common/autotest_common.sh@901 -- # local max=10 00:16:23.183 16:27:57 -- common/autotest_common.sh@902 -- # (( max-- )) 00:16:23.183 16:27:57 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:23.183 16:27:57 -- common/autotest_common.sh@903 -- # get_notification_count 00:16:23.183 16:27:57 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:23.183 16:27:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:23.183 16:27:57 -- common/autotest_common.sh@10 -- # set +x 00:16:23.183 16:27:57 -- host/discovery.sh@74 -- # jq '. | length' 00:16:23.183 16:27:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:23.183 16:27:57 -- host/discovery.sh@74 -- # notification_count=0 00:16:23.183 16:27:57 -- host/discovery.sh@75 -- # notify_id=2 00:16:23.183 16:27:57 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:16:23.183 16:27:57 -- common/autotest_common.sh@904 -- # return 0 00:16:23.183 16:27:57 -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:23.183 16:27:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:23.183 16:27:57 -- common/autotest_common.sh@10 -- # set +x 00:16:23.183 [2024-04-17 16:27:57.173959] bdev_nvme.c:6880:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:16:23.183 [2024-04-17 16:27:57.174019] bdev_nvme.c:6861:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:23.183 16:27:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:23.183 16:27:57 -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:23.183 16:27:57 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:23.183 16:27:57 -- common/autotest_common.sh@901 -- # local max=10 00:16:23.183 16:27:57 -- common/autotest_common.sh@902 -- # (( max-- )) 00:16:23.183 16:27:57 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:23.183 [2024-04-17 16:27:57.180367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:23.183 [2024-04-17 16:27:57.180408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.183 [2024-04-17 16:27:57.180430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:23.183 [2024-04-17 16:27:57.180440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.183 [2024-04-17 16:27:57.180450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:23.183 [2024-04-17 16:27:57.180460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.183 [2024-04-17 16:27:57.180471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:23.183 [2024-04-17 16:27:57.180481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.183 [2024-04-17 16:27:57.180491] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204ca10 is same with the state(5) to be set 00:16:23.183 16:27:57 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:16:23.183 16:27:57 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:23.183 16:27:57 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:23.183 16:27:57 -- host/discovery.sh@59 -- # sort 00:16:23.183 16:27:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:23.183 16:27:57 -- common/autotest_common.sh@10 -- # set +x 00:16:23.183 16:27:57 -- host/discovery.sh@59 -- # xargs 00:16:23.183 [2024-04-17 16:27:57.190324] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x204ca10 (9): Bad file descriptor 00:16:23.183 16:27:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:23.183 [2024-04-17 16:27:57.200342] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:23.183 [2024-04-17 16:27:57.200514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:23.183 [2024-04-17 16:27:57.200567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:23.183 [2024-04-17 16:27:57.200584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x204ca10 with addr=10.0.0.2, port=4420 00:16:23.183 [2024-04-17 16:27:57.200596] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204ca10 is same with the state(5) to be set 00:16:23.183 [2024-04-17 16:27:57.200616] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x204ca10 (9): Bad file descriptor 00:16:23.183 [2024-04-17 16:27:57.200632] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:16:23.183 [2024-04-17 16:27:57.200642] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:16:23.183 [2024-04-17 16:27:57.200654] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:16:23.183 [2024-04-17 16:27:57.200670] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
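The connect() failures above, repeated below at roughly 10 ms intervals, are the host-side bdev_nvme reacting to the listener that was just removed: the controller attached via 10.0.0.2:4420 disconnects, every reconnect attempt is refused (errno 111 is ECONNREFUSED on Linux), and each reset is logged as failed and retried. The refused connect is easy to reproduce from a shell; this probe is an illustration, not part of the test script:

    # nothing listens on 10.0.0.2:4420 after nvmf_subsystem_remove_listener, so a
    # plain TCP connect is refused, matching the posix_sock_create 'errno = 111' lines
    if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "port 4420 refused the connection, as expected"
    fi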
00:16:23.183 [2024-04-17 16:27:57.210423] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:23.183 [2024-04-17 16:27:57.210542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:23.183 [2024-04-17 16:27:57.210589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:23.183 [2024-04-17 16:27:57.210606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x204ca10 with addr=10.0.0.2, port=4420 00:16:23.183 [2024-04-17 16:27:57.210618] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204ca10 is same with the state(5) to be set 00:16:23.183 [2024-04-17 16:27:57.210635] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x204ca10 (9): Bad file descriptor 00:16:23.183 [2024-04-17 16:27:57.210650] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:16:23.183 [2024-04-17 16:27:57.210659] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:16:23.183 [2024-04-17 16:27:57.210669] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:16:23.183 [2024-04-17 16:27:57.210685] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:23.183 [2024-04-17 16:27:57.220500] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:23.183 [2024-04-17 16:27:57.220630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:23.183 [2024-04-17 16:27:57.220680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:23.183 [2024-04-17 16:27:57.220697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x204ca10 with addr=10.0.0.2, port=4420 00:16:23.183 [2024-04-17 16:27:57.220708] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204ca10 is same with the state(5) to be set 00:16:23.183 [2024-04-17 16:27:57.220725] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x204ca10 (9): Bad file descriptor 00:16:23.183 [2024-04-17 16:27:57.220755] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:16:23.183 [2024-04-17 16:27:57.220765] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:16:23.183 [2024-04-17 16:27:57.220775] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:16:23.183 [2024-04-17 16:27:57.220790] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:23.442 [2024-04-17 16:27:57.230601] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:23.442 [2024-04-17 16:27:57.230738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:23.442 [2024-04-17 16:27:57.230805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:23.442 [2024-04-17 16:27:57.230825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x204ca10 with addr=10.0.0.2, port=4420 00:16:23.442 [2024-04-17 16:27:57.230837] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204ca10 is same with the state(5) to be set 00:16:23.442 [2024-04-17 16:27:57.230855] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x204ca10 (9): Bad file descriptor 00:16:23.442 [2024-04-17 16:27:57.230870] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:16:23.442 [2024-04-17 16:27:57.230879] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:16:23.442 [2024-04-17 16:27:57.230889] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:16:23.442 [2024-04-17 16:27:57.230905] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:23.442 16:27:57 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.442 16:27:57 -- common/autotest_common.sh@904 -- # return 0 00:16:23.442 16:27:57 -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:23.442 16:27:57 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:23.442 16:27:57 -- common/autotest_common.sh@901 -- # local max=10 00:16:23.442 16:27:57 -- common/autotest_common.sh@902 -- # (( max-- )) 00:16:23.442 16:27:57 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:23.442 [2024-04-17 16:27:57.240694] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:23.442 [2024-04-17 16:27:57.240846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:23.442 [2024-04-17 16:27:57.240896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:23.442 [2024-04-17 16:27:57.240913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x204ca10 with addr=10.0.0.2, port=4420 00:16:23.442 [2024-04-17 16:27:57.240924] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204ca10 is same with the state(5) to be set 00:16:23.442 [2024-04-17 16:27:57.240946] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x204ca10 (9): Bad file descriptor 00:16:23.442 [2024-04-17 16:27:57.240961] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:16:23.442 [2024-04-17 16:27:57.240970] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:16:23.442 16:27:57 -- common/autotest_common.sh@903 -- # get_bdev_list 00:16:23.442 [2024-04-17 16:27:57.240980] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:16:23.442 [2024-04-17 16:27:57.240995] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:23.442 16:27:57 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:23.442 16:27:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:23.442 16:27:57 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:23.442 16:27:57 -- common/autotest_common.sh@10 -- # set +x 00:16:23.442 16:27:57 -- host/discovery.sh@55 -- # sort 00:16:23.442 16:27:57 -- host/discovery.sh@55 -- # xargs 00:16:23.442 [2024-04-17 16:27:57.250810] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:23.442 [2024-04-17 16:27:57.250911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:23.442 [2024-04-17 16:27:57.250960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:23.442 [2024-04-17 16:27:57.250978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x204ca10 with addr=10.0.0.2, port=4420 00:16:23.442 [2024-04-17 16:27:57.250990] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204ca10 is same with the state(5) to be set 00:16:23.442 [2024-04-17 16:27:57.251008] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x204ca10 (9): Bad file descriptor 00:16:23.442 [2024-04-17 16:27:57.251023] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:16:23.442 [2024-04-17 16:27:57.251033] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:16:23.442 [2024-04-17 16:27:57.251043] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:16:23.442 [2024-04-17 16:27:57.251070] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
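Once the discovery poller reports 10.0.0.2:4420 'not found' (just below), the test waits for get_subsystem_paths to return only $NVMF_SECOND_PORT. That helper is expanded piecewise in the host/discovery.sh@63 trace lines; reassembled, it is a single RPC-plus-jq pipeline (a sketch gathered from the trace, not quoted from the script):

    get_subsystem_paths() {
        local name=$1                # controller name, nvme0 here
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$name" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' \
            | sort -n | xargs        # "4420 4421" with both listeners up, "4421" after the removal
    }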
00:16:23.442 [2024-04-17 16:27:57.260070] bdev_nvme.c:6685:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:16:23.442 [2024-04-17 16:27:57.260115] bdev_nvme.c:6676:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:23.442 16:27:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:23.442 16:27:57 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:23.442 16:27:57 -- common/autotest_common.sh@904 -- # return 0 00:16:23.442 16:27:57 -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:16:23.443 16:27:57 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:16:23.443 16:27:57 -- common/autotest_common.sh@901 -- # local max=10 00:16:23.443 16:27:57 -- common/autotest_common.sh@902 -- # (( max-- )) 00:16:23.443 16:27:57 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:16:23.443 16:27:57 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:16:23.443 16:27:57 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:23.443 16:27:57 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:23.443 16:27:57 -- host/discovery.sh@63 -- # sort -n 00:16:23.443 16:27:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:23.443 16:27:57 -- host/discovery.sh@63 -- # xargs 00:16:23.443 16:27:57 -- common/autotest_common.sh@10 -- # set +x 00:16:23.443 16:27:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:23.443 16:27:57 -- common/autotest_common.sh@903 -- # [[ 4421 == \4\4\2\1 ]] 00:16:23.443 16:27:57 -- common/autotest_common.sh@904 -- # return 0 00:16:23.443 16:27:57 -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:16:23.443 16:27:57 -- host/discovery.sh@79 -- # expected_count=0 00:16:23.443 16:27:57 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:23.443 16:27:57 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:23.443 16:27:57 -- common/autotest_common.sh@901 -- # local max=10 00:16:23.443 16:27:57 -- common/autotest_common.sh@902 -- # (( max-- )) 00:16:23.443 16:27:57 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:23.443 16:27:57 -- common/autotest_common.sh@903 -- # get_notification_count 00:16:23.443 16:27:57 -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:23.443 16:27:57 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:23.443 16:27:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:23.443 16:27:57 -- common/autotest_common.sh@10 -- # set +x 00:16:23.443 16:27:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:23.443 16:27:57 -- host/discovery.sh@74 -- # notification_count=0 00:16:23.443 16:27:57 -- host/discovery.sh@75 -- # notify_id=2 00:16:23.443 16:27:57 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:16:23.443 16:27:57 -- common/autotest_common.sh@904 -- # return 0 00:16:23.443 16:27:57 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:16:23.443 16:27:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:23.443 16:27:57 -- common/autotest_common.sh@10 -- # set +x 00:16:23.443 16:27:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:23.443 16:27:57 -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:16:23.443 16:27:57 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:16:23.443 16:27:57 -- common/autotest_common.sh@901 -- # local max=10 00:16:23.443 16:27:57 -- common/autotest_common.sh@902 -- # (( max-- )) 00:16:23.443 16:27:57 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:16:23.443 16:27:57 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:16:23.443 16:27:57 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:23.443 16:27:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:23.443 16:27:57 -- common/autotest_common.sh@10 -- # set +x 00:16:23.443 16:27:57 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:23.443 16:27:57 -- host/discovery.sh@59 -- # sort 00:16:23.443 16:27:57 -- host/discovery.sh@59 -- # xargs 00:16:23.443 16:27:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:23.443 16:27:57 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:16:23.443 16:27:57 -- common/autotest_common.sh@904 -- # return 0 00:16:23.443 16:27:57 -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:16:23.443 16:27:57 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:16:23.443 16:27:57 -- common/autotest_common.sh@901 -- # local max=10 00:16:23.443 16:27:57 -- common/autotest_common.sh@902 -- # (( max-- )) 00:16:23.443 16:27:57 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:16:23.701 16:27:57 -- common/autotest_common.sh@903 -- # get_bdev_list 00:16:23.701 16:27:57 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:23.701 16:27:57 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:23.701 16:27:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:23.701 16:27:57 -- host/discovery.sh@55 -- # sort 00:16:23.701 16:27:57 -- common/autotest_common.sh@10 -- # set +x 00:16:23.701 16:27:57 -- host/discovery.sh@55 -- # xargs 00:16:23.701 16:27:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:23.701 16:27:57 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:16:23.701 16:27:57 -- common/autotest_common.sh@904 -- # return 0 00:16:23.701 16:27:57 -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:16:23.701 16:27:57 -- host/discovery.sh@79 -- # expected_count=2 00:16:23.701 16:27:57 -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:16:23.701 16:27:57 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:23.701 16:27:57 -- common/autotest_common.sh@901 -- # local max=10 00:16:23.701 16:27:57 -- common/autotest_common.sh@902 -- # (( max-- )) 00:16:23.701 16:27:57 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:23.701 16:27:57 -- common/autotest_common.sh@903 -- # get_notification_count 00:16:23.702 16:27:57 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:23.702 16:27:57 -- host/discovery.sh@74 -- # jq '. | length' 00:16:23.702 16:27:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:23.702 16:27:57 -- common/autotest_common.sh@10 -- # set +x 00:16:23.702 16:27:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:23.702 16:27:57 -- host/discovery.sh@74 -- # notification_count=2 00:16:23.702 16:27:57 -- host/discovery.sh@75 -- # notify_id=4 00:16:23.702 16:27:57 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:16:23.702 16:27:57 -- common/autotest_common.sh@904 -- # return 0 00:16:23.702 16:27:57 -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:23.702 16:27:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:23.702 16:27:57 -- common/autotest_common.sh@10 -- # set +x 00:16:24.640 [2024-04-17 16:27:58.610934] bdev_nvme.c:6898:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:24.640 [2024-04-17 16:27:58.610979] bdev_nvme.c:6978:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:24.640 [2024-04-17 16:27:58.611014] bdev_nvme.c:6861:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:24.898 [2024-04-17 16:27:58.697137] bdev_nvme.c:6827:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:16:24.898 [2024-04-17 16:27:58.756690] bdev_nvme.c:6717:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:24.898 [2024-04-17 16:27:58.756766] bdev_nvme.c:6676:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:24.898 16:27:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.898 16:27:58 -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:24.898 16:27:58 -- common/autotest_common.sh@638 -- # local es=0 00:16:24.898 16:27:58 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:24.898 16:27:58 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:16:24.898 16:27:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:24.898 16:27:58 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:16:24.898 16:27:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:24.898 16:27:58 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
-w 00:16:24.898 16:27:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.898 16:27:58 -- common/autotest_common.sh@10 -- # set +x 00:16:24.898 2024/04/17 16:27:58 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:16:24.898 request: 00:16:24.898 { 00:16:24.898 "method": "bdev_nvme_start_discovery", 00:16:24.898 "params": { 00:16:24.898 "name": "nvme", 00:16:24.898 "trtype": "tcp", 00:16:24.898 "traddr": "10.0.0.2", 00:16:24.898 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:24.898 "adrfam": "ipv4", 00:16:24.898 "trsvcid": "8009", 00:16:24.898 "wait_for_attach": true 00:16:24.898 } 00:16:24.898 } 00:16:24.898 Got JSON-RPC error response 00:16:24.898 GoRPCClient: error on JSON-RPC call 00:16:24.898 16:27:58 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:16:24.898 16:27:58 -- common/autotest_common.sh@641 -- # es=1 00:16:24.898 16:27:58 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:24.898 16:27:58 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:24.898 16:27:58 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:24.898 16:27:58 -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:16:24.898 16:27:58 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:24.898 16:27:58 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:24.898 16:27:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.898 16:27:58 -- common/autotest_common.sh@10 -- # set +x 00:16:24.898 16:27:58 -- host/discovery.sh@67 -- # sort 00:16:24.898 16:27:58 -- host/discovery.sh@67 -- # xargs 00:16:24.898 16:27:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.898 16:27:58 -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:16:24.898 16:27:58 -- host/discovery.sh@146 -- # get_bdev_list 00:16:24.898 16:27:58 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:24.898 16:27:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.898 16:27:58 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:24.898 16:27:58 -- common/autotest_common.sh@10 -- # set +x 00:16:24.898 16:27:58 -- host/discovery.sh@55 -- # sort 00:16:24.898 16:27:58 -- host/discovery.sh@55 -- # xargs 00:16:24.898 16:27:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.898 16:27:58 -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:24.898 16:27:58 -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:24.898 16:27:58 -- common/autotest_common.sh@638 -- # local es=0 00:16:24.898 16:27:58 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:24.898 16:27:58 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:16:24.898 16:27:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:24.898 16:27:58 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:16:24.898 16:27:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:24.898 16:27:58 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 
10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:24.898 16:27:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.898 16:27:58 -- common/autotest_common.sh@10 -- # set +x 00:16:24.898 2024/04/17 16:27:58 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:16:24.898 request: 00:16:24.898 { 00:16:24.898 "method": "bdev_nvme_start_discovery", 00:16:24.898 "params": { 00:16:24.898 "name": "nvme_second", 00:16:24.898 "trtype": "tcp", 00:16:24.898 "traddr": "10.0.0.2", 00:16:24.898 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:24.898 "adrfam": "ipv4", 00:16:24.898 "trsvcid": "8009", 00:16:24.898 "wait_for_attach": true 00:16:24.898 } 00:16:24.898 } 00:16:24.898 Got JSON-RPC error response 00:16:24.898 GoRPCClient: error on JSON-RPC call 00:16:24.898 16:27:58 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:16:24.898 16:27:58 -- common/autotest_common.sh@641 -- # es=1 00:16:24.898 16:27:58 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:24.898 16:27:58 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:24.898 16:27:58 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:24.899 16:27:58 -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:16:24.899 16:27:58 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:24.899 16:27:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.899 16:27:58 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:24.899 16:27:58 -- common/autotest_common.sh@10 -- # set +x 00:16:24.899 16:27:58 -- host/discovery.sh@67 -- # sort 00:16:24.899 16:27:58 -- host/discovery.sh@67 -- # xargs 00:16:24.899 16:27:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:25.158 16:27:58 -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:16:25.158 16:27:58 -- host/discovery.sh@152 -- # get_bdev_list 00:16:25.158 16:27:58 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:25.158 16:27:58 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:25.158 16:27:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:25.158 16:27:58 -- host/discovery.sh@55 -- # sort 00:16:25.158 16:27:58 -- common/autotest_common.sh@10 -- # set +x 00:16:25.158 16:27:58 -- host/discovery.sh@55 -- # xargs 00:16:25.158 16:27:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:25.158 16:27:59 -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:25.158 16:27:59 -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:25.158 16:27:59 -- common/autotest_common.sh@638 -- # local es=0 00:16:25.158 16:27:59 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:25.158 16:27:59 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:16:25.158 16:27:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:25.158 16:27:59 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:16:25.158 16:27:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:25.158 16:27:59 -- common/autotest_common.sh@641 -- # 
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:25.158 16:27:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:25.158 16:27:59 -- common/autotest_common.sh@10 -- # set +x 00:16:26.093 [2024-04-17 16:28:00.034349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:26.093 [2024-04-17 16:28:00.034444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:26.093 [2024-04-17 16:28:00.034465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20babf0 with addr=10.0.0.2, port=8010 00:16:26.093 [2024-04-17 16:28:00.034490] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:26.093 [2024-04-17 16:28:00.034502] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:26.093 [2024-04-17 16:28:00.034512] bdev_nvme.c:6960:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:16:27.029 [2024-04-17 16:28:01.034407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:27.029 [2024-04-17 16:28:01.034518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:27.029 [2024-04-17 16:28:01.034539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20babf0 with addr=10.0.0.2, port=8010 00:16:27.029 [2024-04-17 16:28:01.034564] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:27.029 [2024-04-17 16:28:01.034576] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:27.029 [2024-04-17 16:28:01.034588] bdev_nvme.c:6960:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:16:28.018 [2024-04-17 16:28:02.034203] bdev_nvme.c:6941:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:16:28.018 2024/04/17 16:28:02 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:16:28.018 request: 00:16:28.018 { 00:16:28.018 "method": "bdev_nvme_start_discovery", 00:16:28.018 "params": { 00:16:28.018 "name": "nvme_second", 00:16:28.018 "trtype": "tcp", 00:16:28.018 "traddr": "10.0.0.2", 00:16:28.018 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:28.018 "adrfam": "ipv4", 00:16:28.018 "trsvcid": "8010", 00:16:28.018 "attach_timeout_ms": 3000 00:16:28.018 } 00:16:28.018 } 00:16:28.018 Got JSON-RPC error response 00:16:28.018 GoRPCClient: error on JSON-RPC call 00:16:28.018 16:28:02 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:16:28.018 16:28:02 -- common/autotest_common.sh@641 -- # es=1 00:16:28.018 16:28:02 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:28.018 16:28:02 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:28.018 16:28:02 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:28.018 16:28:02 -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:16:28.018 16:28:02 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:28.018 16:28:02 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:28.018 16:28:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:28.018 16:28:02 -- common/autotest_common.sh@10 -- # set +x 00:16:28.018 
16:28:02 -- host/discovery.sh@67 -- # sort 00:16:28.018 16:28:02 -- host/discovery.sh@67 -- # xargs 00:16:28.018 16:28:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:28.278 16:28:02 -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:16:28.278 16:28:02 -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:16:28.278 16:28:02 -- host/discovery.sh@161 -- # kill 82234 00:16:28.278 16:28:02 -- host/discovery.sh@162 -- # nvmftestfini 00:16:28.278 16:28:02 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:28.278 16:28:02 -- nvmf/common.sh@117 -- # sync 00:16:28.278 16:28:02 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:28.278 16:28:02 -- nvmf/common.sh@120 -- # set +e 00:16:28.278 16:28:02 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:28.278 16:28:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:28.278 rmmod nvme_tcp 00:16:28.278 rmmod nvme_fabrics 00:16:28.278 rmmod nvme_keyring 00:16:28.278 16:28:02 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:28.278 16:28:02 -- nvmf/common.sh@124 -- # set -e 00:16:28.278 16:28:02 -- nvmf/common.sh@125 -- # return 0 00:16:28.278 16:28:02 -- nvmf/common.sh@478 -- # '[' -n 82178 ']' 00:16:28.278 16:28:02 -- nvmf/common.sh@479 -- # killprocess 82178 00:16:28.278 16:28:02 -- common/autotest_common.sh@936 -- # '[' -z 82178 ']' 00:16:28.278 16:28:02 -- common/autotest_common.sh@940 -- # kill -0 82178 00:16:28.278 16:28:02 -- common/autotest_common.sh@941 -- # uname 00:16:28.278 16:28:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:28.278 16:28:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82178 00:16:28.278 16:28:02 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:28.278 killing process with pid 82178 00:16:28.278 16:28:02 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:28.278 16:28:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82178' 00:16:28.278 16:28:02 -- common/autotest_common.sh@955 -- # kill 82178 00:16:28.278 16:28:02 -- common/autotest_common.sh@960 -- # wait 82178 00:16:28.537 16:28:02 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:28.537 16:28:02 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:28.537 16:28:02 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:28.537 16:28:02 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:28.537 16:28:02 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:28.537 16:28:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:28.537 16:28:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:28.537 16:28:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:28.537 16:28:02 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:28.537 00:16:28.537 real 0m11.431s 00:16:28.537 user 0m22.581s 00:16:28.537 sys 0m1.754s 00:16:28.537 16:28:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:28.537 16:28:02 -- common/autotest_common.sh@10 -- # set +x 00:16:28.537 ************************************ 00:16:28.537 END TEST nvmf_discovery 00:16:28.537 ************************************ 00:16:28.796 16:28:02 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:28.796 16:28:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:28.796 16:28:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:28.796 16:28:02 -- common/autotest_common.sh@10 -- # set +x 
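Before nvmf_discovery wraps up above, its last three bdev_nvme_start_discovery calls fail on purpose: each is wrapped in the NOT helper whose @638-@665 expansions appear in the trace, so the two Code=-17 'File exists' responses (a discovery service named nvme already runs, and a second one cannot target the same 8009 endpoint) and the Code=-110 timeout against the unused port 8010 are the expected outcomes. Stripped of the argument-validation and signal-screening branches visible in the trace, the wrapper reduces to inverting the exit status, roughly:

    NOT() {              # minimal sketch; the real helper also validates the command
        local es=0       # and screens signal exits, per the (( es > 128 )) trace line
        "$@" || es=$?
        (( es != 0 ))    # success here means the wrapped command failed
    }

That is why each failed RPC is followed by '[[ 1 == 0 ]]' and 'es=1' in the trace while the test keeps going.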
00:16:28.796 ************************************ 00:16:28.796 START TEST nvmf_discovery_remove_ifc 00:16:28.796 ************************************ 00:16:28.796 16:28:02 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:28.796 * Looking for test storage... 00:16:28.796 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:28.796 16:28:02 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:28.796 16:28:02 -- nvmf/common.sh@7 -- # uname -s 00:16:28.796 16:28:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:28.796 16:28:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:28.796 16:28:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:28.796 16:28:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:28.796 16:28:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:28.796 16:28:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:28.796 16:28:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:28.796 16:28:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:28.796 16:28:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:28.796 16:28:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:28.796 16:28:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d 00:16:28.796 16:28:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=35bbb10f-fc38-42ac-b909-033700c5e05d 00:16:28.796 16:28:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:28.796 16:28:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:28.796 16:28:02 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:28.796 16:28:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:28.796 16:28:02 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:28.796 16:28:02 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:28.796 16:28:02 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:28.796 16:28:02 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:28.796 16:28:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.796 16:28:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.797 16:28:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.797 16:28:02 -- paths/export.sh@5 -- # export PATH 00:16:28.797 16:28:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.797 16:28:02 -- nvmf/common.sh@47 -- # : 0 00:16:28.797 16:28:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:28.797 16:28:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:28.797 16:28:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:28.797 16:28:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:28.797 16:28:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:28.797 16:28:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:28.797 16:28:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:28.797 16:28:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:28.797 16:28:02 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:16:28.797 16:28:02 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:16:28.797 16:28:02 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:16:28.797 16:28:02 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:28.797 16:28:02 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:16:28.797 16:28:02 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:16:28.797 16:28:02 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:16:28.797 16:28:02 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:28.797 16:28:02 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:28.797 16:28:02 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:28.797 16:28:02 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:28.797 16:28:02 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:28.797 16:28:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:28.797 16:28:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:28.797 16:28:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:28.797 16:28:02 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:16:28.797 16:28:02 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:16:28.797 16:28:02 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:16:28.797 16:28:02 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:16:28.797 16:28:02 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:16:28.797 16:28:02 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:16:28.797 16:28:02 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:28.797 16:28:02 -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:28.797 16:28:02 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:28.797 16:28:02 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:28.797 16:28:02 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:28.797 16:28:02 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:28.797 16:28:02 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:28.797 16:28:02 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:28.797 16:28:02 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:28.797 16:28:02 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:28.797 16:28:02 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:28.797 16:28:02 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:28.797 16:28:02 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:28.797 16:28:02 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:28.797 Cannot find device "nvmf_tgt_br" 00:16:28.797 16:28:02 -- nvmf/common.sh@155 -- # true 00:16:28.797 16:28:02 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:28.797 Cannot find device "nvmf_tgt_br2" 00:16:28.797 16:28:02 -- nvmf/common.sh@156 -- # true 00:16:28.797 16:28:02 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:28.797 16:28:02 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:29.056 Cannot find device "nvmf_tgt_br" 00:16:29.056 16:28:02 -- nvmf/common.sh@158 -- # true 00:16:29.056 16:28:02 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:29.056 Cannot find device "nvmf_tgt_br2" 00:16:29.056 16:28:02 -- nvmf/common.sh@159 -- # true 00:16:29.056 16:28:02 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:29.056 16:28:02 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:29.056 16:28:02 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:29.056 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:29.056 16:28:02 -- nvmf/common.sh@162 -- # true 00:16:29.056 16:28:02 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:29.056 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:29.056 16:28:02 -- nvmf/common.sh@163 -- # true 00:16:29.056 16:28:02 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:29.056 16:28:02 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:29.056 16:28:02 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:29.056 16:28:02 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:29.056 16:28:02 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:29.056 16:28:02 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:29.056 16:28:02 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:29.056 16:28:02 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:29.056 16:28:03 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:29.056 16:28:03 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:29.056 16:28:03 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:29.056 16:28:03 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:29.056 16:28:03 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:29.056 16:28:03 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:29.056 16:28:03 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:29.056 16:28:03 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:29.056 16:28:03 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:29.056 16:28:03 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:29.056 16:28:03 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:29.056 16:28:03 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:29.056 16:28:03 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:29.315 16:28:03 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:29.315 16:28:03 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:29.315 16:28:03 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:29.315 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:29.315 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:16:29.315 00:16:29.315 --- 10.0.0.2 ping statistics --- 00:16:29.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:29.315 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:16:29.315 16:28:03 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:29.315 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:29.315 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:16:29.315 00:16:29.315 --- 10.0.0.3 ping statistics --- 00:16:29.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:29.315 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:16:29.315 16:28:03 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:29.315 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:29.315 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:16:29.315 00:16:29.315 --- 10.0.0.1 ping statistics --- 00:16:29.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:29.315 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:16:29.315 16:28:03 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:29.315 16:28:03 -- nvmf/common.sh@422 -- # return 0 00:16:29.315 16:28:03 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:29.315 16:28:03 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:29.315 16:28:03 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:29.315 16:28:03 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:29.315 16:28:03 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:29.315 16:28:03 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:29.315 16:28:03 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:29.315 16:28:03 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:16:29.315 16:28:03 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:29.315 16:28:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:29.315 16:28:03 -- common/autotest_common.sh@10 -- # set +x 00:16:29.315 16:28:03 -- nvmf/common.sh@470 -- # nvmfpid=82725 00:16:29.315 16:28:03 -- nvmf/common.sh@471 -- # waitforlisten 82725 00:16:29.315 16:28:03 -- common/autotest_common.sh@817 -- # '[' -z 82725 ']' 00:16:29.315 16:28:03 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:29.315 16:28:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:29.315 16:28:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:29.315 16:28:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:29.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:29.315 16:28:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:29.315 16:28:03 -- common/autotest_common.sh@10 -- # set +x 00:16:29.315 [2024-04-17 16:28:03.223760] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:16:29.315 [2024-04-17 16:28:03.223904] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:29.575 [2024-04-17 16:28:03.366440] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.575 [2024-04-17 16:28:03.508007] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:29.575 [2024-04-17 16:28:03.508056] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:29.575 [2024-04-17 16:28:03.508069] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:29.575 [2024-04-17 16:28:03.508079] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:29.575 [2024-04-17 16:28:03.508088] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
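At this point the trace has rebuilt the test network (nvmf/common.sh@166 through @207), verified it with pings in both directions, loaded nvme-tcp, and launched nvmf_tgt inside the namespace. Condensed into a standalone sketch, with interface names and addresses taken from the trace; the second target interface (nvmf_tgt_if2, 10.0.0.3) mirrors the first and is elided:

    # Target side lives in its own network namespace; host-side veth ends hang off a bridge.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br up && ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1  # target -> initiator

The teardown attempts at the top ("Cannot find device", "Cannot open network namespace") are the expected noise of deleting a topology that does not exist yet on a fresh executor.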
00:16:29.575 [2024-04-17 16:28:03.508124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:30.511 16:28:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:30.511 16:28:04 -- common/autotest_common.sh@850 -- # return 0 00:16:30.511 16:28:04 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:30.512 16:28:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:30.512 16:28:04 -- common/autotest_common.sh@10 -- # set +x 00:16:30.512 16:28:04 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:30.512 16:28:04 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:16:30.512 16:28:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:30.512 16:28:04 -- common/autotest_common.sh@10 -- # set +x 00:16:30.512 [2024-04-17 16:28:04.324905] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:30.512 [2024-04-17 16:28:04.333030] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:30.512 null0 00:16:30.512 [2024-04-17 16:28:04.364956] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:30.512 16:28:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:30.512 16:28:04 -- host/discovery_remove_ifc.sh@59 -- # hostpid=82775 00:16:30.512 16:28:04 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:16:30.512 16:28:04 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 82775 /tmp/host.sock 00:16:30.512 16:28:04 -- common/autotest_common.sh@817 -- # '[' -z 82775 ']' 00:16:30.512 16:28:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:16:30.512 16:28:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:30.512 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:30.512 16:28:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:30.512 16:28:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:30.512 16:28:04 -- common/autotest_common.sh@10 -- # set +x 00:16:30.512 [2024-04-17 16:28:04.447582] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 
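Both SPDK apps above are started in the background and then gated on waitforlisten, which blocks until the app answers on its JSON-RPC socket (/var/tmp/spdk.sock for the target, /tmp/host.sock for the host started with -r). The helper's body is not shown in the trace; a hedged sketch of the idea, assuming rpc.py is run from the spdk repo root and using rpc_get_methods as the liveness probe:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while (( max_retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1    # app died before it could listen
            scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1    # never came up within the retry budget
    }

The local rpc_addr and max_retries=100 assignments and the echo message match the @821/@822/@824 trace lines; everything else is a reconstruction.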
00:16:30.512 [2024-04-17 16:28:04.447704] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82775 ] 00:16:30.770 [2024-04-17 16:28:04.593237] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.770 [2024-04-17 16:28:04.724115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.708 16:28:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:31.708 16:28:05 -- common/autotest_common.sh@850 -- # return 0 00:16:31.708 16:28:05 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:31.708 16:28:05 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:16:31.708 16:28:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:31.708 16:28:05 -- common/autotest_common.sh@10 -- # set +x 00:16:31.708 16:28:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:31.708 16:28:05 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:16:31.708 16:28:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:31.708 16:28:05 -- common/autotest_common.sh@10 -- # set +x 00:16:31.708 16:28:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:31.708 16:28:05 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:16:31.708 16:28:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:31.708 16:28:05 -- common/autotest_common.sh@10 -- # set +x 00:16:32.691 [2024-04-17 16:28:06.588138] bdev_nvme.c:6898:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:32.691 [2024-04-17 16:28:06.588180] bdev_nvme.c:6978:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:32.691 [2024-04-17 16:28:06.588216] bdev_nvme.c:6861:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:32.691 [2024-04-17 16:28:06.674312] bdev_nvme.c:6827:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:16:32.691 [2024-04-17 16:28:06.730759] bdev_nvme.c:7688:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:32.691 [2024-04-17 16:28:06.730863] bdev_nvme.c:7688:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:32.691 [2024-04-17 16:28:06.730902] bdev_nvme.c:7688:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:32.691 [2024-04-17 16:28:06.730920] bdev_nvme.c:6717:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:32.691 [2024-04-17 16:28:06.730947] bdev_nvme.c:6676:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:32.691 16:28:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:32.691 16:28:06 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:16:32.691 16:28:06 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:32.950 16:28:06 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:32.950 16:28:06 -- host/discovery_remove_ifc.sh@29 -- # jq -r 
'.[].name' 00:16:32.950 [2024-04-17 16:28:06.736712] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1b6a930 was disconnected and freed. delete nvme_qpair. 00:16:32.950 16:28:06 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:32.950 16:28:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:32.950 16:28:06 -- common/autotest_common.sh@10 -- # set +x 00:16:32.950 16:28:06 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:32.950 16:28:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:32.950 16:28:06 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:16:32.950 16:28:06 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:16:32.950 16:28:06 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:16:32.950 16:28:06 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:16:32.950 16:28:06 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:32.950 16:28:06 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:32.950 16:28:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:32.950 16:28:06 -- common/autotest_common.sh@10 -- # set +x 00:16:32.950 16:28:06 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:32.950 16:28:06 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:32.950 16:28:06 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:32.950 16:28:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:32.950 16:28:06 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:32.950 16:28:06 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:33.887 16:28:07 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:33.887 16:28:07 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:33.887 16:28:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:33.887 16:28:07 -- common/autotest_common.sh@10 -- # set +x 00:16:33.887 16:28:07 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:33.887 16:28:07 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:33.887 16:28:07 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:33.887 16:28:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:33.887 16:28:07 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:33.887 16:28:07 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:35.265 16:28:08 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:35.265 16:28:08 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:35.265 16:28:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:35.265 16:28:08 -- common/autotest_common.sh@10 -- # set +x 00:16:35.265 16:28:08 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:35.265 16:28:08 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:35.265 16:28:08 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:35.265 16:28:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:35.265 16:28:08 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:35.265 16:28:08 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:36.200 16:28:09 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:36.200 16:28:09 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:36.200 16:28:09 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:36.200 16:28:09 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:16:36.200 16:28:09 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:36.200 16:28:09 -- common/autotest_common.sh@10 -- # set +x 00:16:36.200 16:28:09 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:36.200 16:28:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:36.200 16:28:10 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:36.200 16:28:10 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:37.135 16:28:11 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:37.135 16:28:11 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:37.135 16:28:11 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:37.135 16:28:11 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:37.135 16:28:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:37.135 16:28:11 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:37.135 16:28:11 -- common/autotest_common.sh@10 -- # set +x 00:16:37.135 16:28:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:37.135 16:28:11 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:37.135 16:28:11 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:38.101 16:28:12 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:38.101 16:28:12 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:38.101 16:28:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:38.101 16:28:12 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:38.101 16:28:12 -- common/autotest_common.sh@10 -- # set +x 00:16:38.101 16:28:12 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:38.359 16:28:12 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:38.359 [2024-04-17 16:28:12.158428] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:16:38.359 [2024-04-17 16:28:12.158503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:38.359 [2024-04-17 16:28:12.158519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.359 [2024-04-17 16:28:12.158533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:38.359 [2024-04-17 16:28:12.158543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.359 [2024-04-17 16:28:12.158553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:38.359 [2024-04-17 16:28:12.158562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.359 [2024-04-17 16:28:12.158573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:38.359 [2024-04-17 16:28:12.158582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.359 [2024-04-17 16:28:12.158593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:38.359 [2024-04-17 16:28:12.158602] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.359 [2024-04-17 16:28:12.158611] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adc5f0 is same with the state(5) to be set 00:16:38.359 16:28:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:38.359 [2024-04-17 16:28:12.168420] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1adc5f0 (9): Bad file descriptor 00:16:38.359 [2024-04-17 16:28:12.178452] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:38.359 16:28:12 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:38.359 16:28:12 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:39.293 16:28:13 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:39.293 16:28:13 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:39.293 16:28:13 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:39.293 16:28:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:39.293 16:28:13 -- common/autotest_common.sh@10 -- # set +x 00:16:39.293 16:28:13 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:39.293 16:28:13 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:39.293 [2024-04-17 16:28:13.225907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:16:40.226 [2024-04-17 16:28:14.249938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:16:40.226 [2024-04-17 16:28:14.250104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1adc5f0 with addr=10.0.0.2, port=4420 00:16:40.226 [2024-04-17 16:28:14.250141] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adc5f0 is same with the state(5) to be set 00:16:40.226 [2024-04-17 16:28:14.251167] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1adc5f0 (9): Bad file descriptor 00:16:40.226 [2024-04-17 16:28:14.251257] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
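The get_bdev_list/sleep 1 iterations that dominate the trace from @72 onward are a simple polling loop: the test waits for the host app's bdev list to become "nvme0n1" after discovery attaches, and then, once the target interface is stripped (@75/@76), to drain back to the empty string. Reconstructed from the commands visible in the trace (the real helpers presumably also bound the wait; that is omitted here):

    get_bdev_list() {
        # One sorted, space-joined line of bdev names from the host app.
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Poll once per second until the list matches the expected string.
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }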
00:16:40.226 [2024-04-17 16:28:14.251307] bdev_nvme.c:6649:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:16:40.227 [2024-04-17 16:28:14.251370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:40.227 [2024-04-17 16:28:14.251404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:40.227 [2024-04-17 16:28:14.251438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:40.227 [2024-04-17 16:28:14.251468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:40.227 [2024-04-17 16:28:14.251498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:40.227 [2024-04-17 16:28:14.251520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:40.227 [2024-04-17 16:28:14.251539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:40.227 [2024-04-17 16:28:14.251559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:40.227 [2024-04-17 16:28:14.251579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:40.227 [2024-04-17 16:28:14.251596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:40.227 [2024-04-17 16:28:14.251615] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
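The error burst above is the point of the test: with nvmf_tgt_if's address deleted and the link down, reads fail with errno 110 (Connection timed out), the host resets the controller, every reconnect to 10.0.0.2:4420 fails, and once the controller-loss timeout expires the controller is failed, its nvme0n1 bdev is deleted, and the discovery entry is removed. That timeline is set by the flags passed to bdev_nvme_start_discovery at @69, shown here in rpc.py form (equivalent to the rpc_cmd call in the trace):

    # Reconnect once per second, declare fast I/O failure after 1 s, and give
    # up on the controller entirely after 2 s without a working connection.
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 \
        --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 \
        --wait-for-attach

Because the discovery service itself keeps running, restoring the interface is enough for the subsystem to be re-attached, this time as nvme1, which is what the wait_for_bdev nvme1n1 step below verifies.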
00:16:40.227 [2024-04-17 16:28:14.251644] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1adb470 (9): Bad file descriptor 00:16:40.227 [2024-04-17 16:28:14.252283] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:16:40.227 [2024-04-17 16:28:14.252330] nvme_ctrlr.c:1148:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:16:40.227 16:28:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:40.485 16:28:14 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:40.485 16:28:14 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:41.429 16:28:15 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:41.429 16:28:15 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:41.429 16:28:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:41.429 16:28:15 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:41.429 16:28:15 -- common/autotest_common.sh@10 -- # set +x 00:16:41.429 16:28:15 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:41.429 16:28:15 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:41.429 16:28:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:41.429 16:28:15 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:16:41.429 16:28:15 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:41.429 16:28:15 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:41.429 16:28:15 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:16:41.429 16:28:15 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:41.429 16:28:15 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:41.430 16:28:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:41.430 16:28:15 -- common/autotest_common.sh@10 -- # set +x 00:16:41.430 16:28:15 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:41.430 16:28:15 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:41.430 16:28:15 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:41.430 16:28:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:41.430 16:28:15 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:16:41.430 16:28:15 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:42.379 [2024-04-17 16:28:16.259902] bdev_nvme.c:6898:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:42.379 [2024-04-17 16:28:16.259948] bdev_nvme.c:6978:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:42.379 [2024-04-17 16:28:16.259970] bdev_nvme.c:6861:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:42.379 [2024-04-17 16:28:16.346060] bdev_nvme.c:6827:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:16:42.379 [2024-04-17 16:28:16.401341] bdev_nvme.c:7688:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:42.379 [2024-04-17 16:28:16.401405] bdev_nvme.c:7688:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:42.379 [2024-04-17 16:28:16.401431] bdev_nvme.c:7688:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:42.379 [2024-04-17 16:28:16.401449] bdev_nvme.c:6717:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:16:42.379 [2024-04-17 16:28:16.401459] bdev_nvme.c:6676:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:42.379 [2024-04-17 16:28:16.408363] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1b422c0 was disconnected and freed. delete nvme_qpair. 00:16:42.379 16:28:16 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:42.379 16:28:16 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:42.379 16:28:16 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:42.379 16:28:16 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:42.379 16:28:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:42.379 16:28:16 -- common/autotest_common.sh@10 -- # set +x 00:16:42.379 16:28:16 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:42.637 16:28:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:42.637 16:28:16 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:16:42.637 16:28:16 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:16:42.637 16:28:16 -- host/discovery_remove_ifc.sh@90 -- # killprocess 82775 00:16:42.637 16:28:16 -- common/autotest_common.sh@936 -- # '[' -z 82775 ']' 00:16:42.637 16:28:16 -- common/autotest_common.sh@940 -- # kill -0 82775 00:16:42.637 16:28:16 -- common/autotest_common.sh@941 -- # uname 00:16:42.637 16:28:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:42.637 16:28:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82775 00:16:42.637 16:28:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:42.637 16:28:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:42.637 killing process with pid 82775 00:16:42.637 16:28:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82775' 00:16:42.637 16:28:16 -- common/autotest_common.sh@955 -- # kill 82775 00:16:42.637 16:28:16 -- common/autotest_common.sh@960 -- # wait 82775 00:16:42.895 16:28:16 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:16:42.895 16:28:16 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:42.895 16:28:16 -- nvmf/common.sh@117 -- # sync 00:16:42.895 16:28:16 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:42.895 16:28:16 -- nvmf/common.sh@120 -- # set +e 00:16:42.895 16:28:16 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:42.895 16:28:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:42.895 rmmod nvme_tcp 00:16:42.895 rmmod nvme_fabrics 00:16:42.895 rmmod nvme_keyring 00:16:42.895 16:28:16 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:42.895 16:28:16 -- nvmf/common.sh@124 -- # set -e 00:16:42.895 16:28:16 -- nvmf/common.sh@125 -- # return 0 00:16:42.895 16:28:16 -- nvmf/common.sh@478 -- # '[' -n 82725 ']' 00:16:42.895 16:28:16 -- nvmf/common.sh@479 -- # killprocess 82725 00:16:42.895 16:28:16 -- common/autotest_common.sh@936 -- # '[' -z 82725 ']' 00:16:42.895 16:28:16 -- common/autotest_common.sh@940 -- # kill -0 82725 00:16:42.895 16:28:16 -- common/autotest_common.sh@941 -- # uname 00:16:42.895 16:28:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:42.895 16:28:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82725 00:16:42.895 killing process with pid 82725 00:16:42.895 16:28:16 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:42.895 16:28:16 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 
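killprocess, traced above for the host app (pid 82775) and continuing below for the target (pid 82725), inspects the process before signalling it: if ps reports the comm as sudo it would have to signal through sudo, otherwise (reactor_0/reactor_1 here, the SPDK reactor threads) it kills and reaps directly. A hedged sketch matching the checks visible in the trace; the sudo branch is not exercised in this run and is an assumption:

    killprocess() {
        local pid=$1 process_name=
        [[ -n "$pid" ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 0    # already gone
        if [[ "$(uname)" == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        echo "killing process with pid $pid"
        if [[ "$process_name" == sudo ]]; then
            sudo kill "$pid"    # assumed branch: signal the wrapped command, not sudo
        else
            kill "$pid"
        fi
        wait "$pid"             # reap the child so sockets and shm are released
    }

After both kills, nvmftestfini unwinds the rest (traced below): the kernel NVMe/TCP modules come out, the test namespace is removed, and the initiator address is flushed. Condensed; the body of _remove_spdk_ns is not shown in the trace, so the ip netns delete line is an assumption:

    modprobe -v -r nvme-tcp             # rmmod nvme_tcp, nvme_fabrics, nvme_keyring per the output
    modprobe -v -r nvme-fabrics
    ip netns delete nvmf_tgt_ns_spdk    # assumed implementation of _remove_spdk_ns
    ip -4 addr flush nvmf_init_if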
00:16:42.895 16:28:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82725' 00:16:42.895 16:28:16 -- common/autotest_common.sh@955 -- # kill 82725 00:16:42.895 16:28:16 -- common/autotest_common.sh@960 -- # wait 82725 00:16:43.152 16:28:17 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:43.152 16:28:17 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:43.152 16:28:17 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:43.152 16:28:17 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:43.152 16:28:17 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:43.152 16:28:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:43.152 16:28:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:43.152 16:28:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:43.152 16:28:17 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:43.152 ************************************ 00:16:43.152 END TEST nvmf_discovery_remove_ifc 00:16:43.152 ************************************ 00:16:43.152 00:16:43.152 real 0m14.484s 00:16:43.152 user 0m24.855s 00:16:43.152 sys 0m1.674s 00:16:43.152 16:28:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:43.152 16:28:17 -- common/autotest_common.sh@10 -- # set +x 00:16:43.152 16:28:17 -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:16:43.152 16:28:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:43.153 16:28:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:43.153 16:28:17 -- common/autotest_common.sh@10 -- # set +x 00:16:43.410 ************************************ 00:16:43.410 START TEST nvmf_identify_kernel_target 00:16:43.410 ************************************ 00:16:43.410 16:28:17 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:16:43.410 * Looking for test storage... 
00:16:43.410 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:43.410 16:28:17 -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:43.410 16:28:17 -- nvmf/common.sh@7 -- # uname -s 00:16:43.410 16:28:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:43.410 16:28:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:43.410 16:28:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:43.411 16:28:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:43.411 16:28:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:43.411 16:28:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:43.411 16:28:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:43.411 16:28:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:43.411 16:28:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:43.411 16:28:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:43.411 16:28:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d 00:16:43.411 16:28:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=35bbb10f-fc38-42ac-b909-033700c5e05d 00:16:43.411 16:28:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:43.411 16:28:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:43.411 16:28:17 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:43.411 16:28:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:43.411 16:28:17 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:43.411 16:28:17 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:43.411 16:28:17 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:43.411 16:28:17 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:43.411 16:28:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.411 16:28:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.411 16:28:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.411 16:28:17 -- paths/export.sh@5 -- # export PATH 00:16:43.411 16:28:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.411 16:28:17 -- nvmf/common.sh@47 -- # : 0 00:16:43.411 16:28:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:43.411 16:28:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:43.411 16:28:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:43.411 16:28:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:43.411 16:28:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:43.411 16:28:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:43.411 16:28:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:43.411 16:28:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:43.411 16:28:17 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:16:43.411 16:28:17 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:43.411 16:28:17 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:43.411 16:28:17 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:43.411 16:28:17 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:43.411 16:28:17 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:43.411 16:28:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:43.411 16:28:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:43.411 16:28:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:43.411 16:28:17 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:16:43.411 16:28:17 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:16:43.411 16:28:17 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:16:43.411 16:28:17 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:16:43.411 16:28:17 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:16:43.411 16:28:17 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:16:43.411 16:28:17 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:43.411 16:28:17 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:43.411 16:28:17 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:43.411 16:28:17 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:43.411 16:28:17 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:43.411 16:28:17 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:43.411 16:28:17 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:43.411 16:28:17 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:16:43.411 16:28:17 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:43.411 16:28:17 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:43.411 16:28:17 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:43.411 16:28:17 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:43.411 16:28:17 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:43.411 16:28:17 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:43.411 Cannot find device "nvmf_tgt_br" 00:16:43.411 16:28:17 -- nvmf/common.sh@155 -- # true 00:16:43.411 16:28:17 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:43.411 Cannot find device "nvmf_tgt_br2" 00:16:43.411 16:28:17 -- nvmf/common.sh@156 -- # true 00:16:43.411 16:28:17 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:43.411 16:28:17 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:43.411 Cannot find device "nvmf_tgt_br" 00:16:43.411 16:28:17 -- nvmf/common.sh@158 -- # true 00:16:43.411 16:28:17 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:43.411 Cannot find device "nvmf_tgt_br2" 00:16:43.411 16:28:17 -- nvmf/common.sh@159 -- # true 00:16:43.411 16:28:17 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:43.669 16:28:17 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:43.669 16:28:17 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:43.669 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:43.669 16:28:17 -- nvmf/common.sh@162 -- # true 00:16:43.669 16:28:17 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:43.669 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:43.669 16:28:17 -- nvmf/common.sh@163 -- # true 00:16:43.669 16:28:17 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:43.669 16:28:17 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:43.669 16:28:17 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:43.669 16:28:17 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:43.669 16:28:17 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:43.669 16:28:17 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:43.669 16:28:17 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:43.669 16:28:17 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:43.669 16:28:17 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:43.669 16:28:17 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:43.669 16:28:17 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:43.669 16:28:17 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:43.669 16:28:17 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:43.669 16:28:17 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:43.669 16:28:17 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:43.669 16:28:17 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:43.669 16:28:17 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:43.669 16:28:17 -- 
nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:43.669 16:28:17 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:43.669 16:28:17 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:43.669 16:28:17 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:43.670 16:28:17 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:43.670 16:28:17 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:43.670 16:28:17 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:43.670 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:43.670 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:16:43.670 00:16:43.670 --- 10.0.0.2 ping statistics --- 00:16:43.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.670 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:16:43.670 16:28:17 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:43.670 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:43.670 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:16:43.670 00:16:43.670 --- 10.0.0.3 ping statistics --- 00:16:43.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.670 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:16:43.670 16:28:17 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:43.670 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:43.670 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:16:43.670 00:16:43.670 --- 10.0.0.1 ping statistics --- 00:16:43.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.670 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:16:43.670 16:28:17 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:43.670 16:28:17 -- nvmf/common.sh@422 -- # return 0 00:16:43.670 16:28:17 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:43.670 16:28:17 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:43.670 16:28:17 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:43.670 16:28:17 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:43.670 16:28:17 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:43.670 16:28:17 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:43.670 16:28:17 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:43.926 16:28:17 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:16:43.926 16:28:17 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:16:43.926 16:28:17 -- nvmf/common.sh@717 -- # local ip 00:16:43.926 16:28:17 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:43.926 16:28:17 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:43.926 16:28:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:43.926 16:28:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:43.926 16:28:17 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:43.926 16:28:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:43.926 16:28:17 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:43.926 16:28:17 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:43.926 16:28:17 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:43.926 16:28:17 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:16:43.926 16:28:17 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:16:43.926 16:28:17 -- nvmf/common.sh@621 -- 
# local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:16:43.926 16:28:17 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:16:43.926 16:28:17 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:43.926 16:28:17 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:43.926 16:28:17 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:16:43.926 16:28:17 -- nvmf/common.sh@628 -- # local block nvme 00:16:43.926 16:28:17 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:16:43.926 16:28:17 -- nvmf/common.sh@631 -- # modprobe nvmet 00:16:43.926 16:28:17 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:16:43.926 16:28:17 -- nvmf/common.sh@636 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:44.183 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:44.183 Waiting for block devices as requested 00:16:44.183 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:44.442 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:44.442 16:28:18 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:16:44.442 16:28:18 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:44.442 16:28:18 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:16:44.442 16:28:18 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:16:44.442 16:28:18 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:44.442 16:28:18 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:44.442 16:28:18 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:16:44.442 16:28:18 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:16:44.442 16:28:18 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:16:44.442 No valid GPT data, bailing 00:16:44.442 16:28:18 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:44.442 16:28:18 -- scripts/common.sh@391 -- # pt= 00:16:44.442 16:28:18 -- scripts/common.sh@392 -- # return 1 00:16:44.442 16:28:18 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:16:44.442 16:28:18 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:16:44.442 16:28:18 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n2 ]] 00:16:44.442 16:28:18 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n2 00:16:44.442 16:28:18 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:16:44.442 16:28:18 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:16:44.442 16:28:18 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:44.442 16:28:18 -- nvmf/common.sh@642 -- # block_in_use nvme0n2 00:16:44.442 16:28:18 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:16:44.442 16:28:18 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:16:44.442 No valid GPT data, bailing 00:16:44.442 16:28:18 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:16:44.442 16:28:18 -- scripts/common.sh@391 -- # pt= 00:16:44.442 16:28:18 -- scripts/common.sh@392 -- # return 1 00:16:44.442 16:28:18 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n2 00:16:44.442 16:28:18 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:16:44.442 16:28:18 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n3 ]] 00:16:44.442 16:28:18 -- nvmf/common.sh@641 -- # is_block_zoned 
nvme0n3 00:16:44.442 16:28:18 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:16:44.442 16:28:18 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:44.442 16:28:18 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:44.442 16:28:18 -- nvmf/common.sh@642 -- # block_in_use nvme0n3 00:16:44.442 16:28:18 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:16:44.442 16:28:18 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:16:44.700 No valid GPT data, bailing 00:16:44.700 16:28:18 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:16:44.700 16:28:18 -- scripts/common.sh@391 -- # pt= 00:16:44.700 16:28:18 -- scripts/common.sh@392 -- # return 1 00:16:44.700 16:28:18 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n3 00:16:44.700 16:28:18 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:16:44.700 16:28:18 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:16:44.700 16:28:18 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:16:44.700 16:28:18 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:16:44.700 16:28:18 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:44.700 16:28:18 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:44.700 16:28:18 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:16:44.700 16:28:18 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:16:44.700 16:28:18 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:16:44.700 No valid GPT data, bailing 00:16:44.700 16:28:18 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:16:44.700 16:28:18 -- scripts/common.sh@391 -- # pt= 00:16:44.700 16:28:18 -- scripts/common.sh@392 -- # return 1 00:16:44.700 16:28:18 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:16:44.700 16:28:18 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme1n1 ]] 00:16:44.700 16:28:18 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:44.700 16:28:18 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:44.700 16:28:18 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:16:44.700 16:28:18 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:16:44.700 16:28:18 -- nvmf/common.sh@656 -- # echo 1 00:16:44.700 16:28:18 -- nvmf/common.sh@657 -- # echo /dev/nvme1n1 00:16:44.700 16:28:18 -- nvmf/common.sh@658 -- # echo 1 00:16:44.700 16:28:18 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:16:44.700 16:28:18 -- nvmf/common.sh@661 -- # echo tcp 00:16:44.700 16:28:18 -- nvmf/common.sh@662 -- # echo 4420 00:16:44.700 16:28:18 -- nvmf/common.sh@663 -- # echo ipv4 00:16:44.700 16:28:18 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:16:44.700 16:28:18 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d --hostid=35bbb10f-fc38-42ac-b909-033700c5e05d -a 10.0.0.1 -t tcp -s 4420 00:16:44.700 00:16:44.700 Discovery Log Number of Records 2, Generation counter 2 00:16:44.700 =====Discovery Log Entry 0====== 00:16:44.700 trtype: tcp 00:16:44.700 adrfam: ipv4 00:16:44.701 subtype: current discovery subsystem 00:16:44.701 treq: not specified, sq flow control disable supported 00:16:44.701 portid: 1 00:16:44.701 trsvcid: 4420 00:16:44.701 
subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:44.701 traddr: 10.0.0.1 00:16:44.701 eflags: none 00:16:44.701 sectype: none 00:16:44.701 =====Discovery Log Entry 1====== 00:16:44.701 trtype: tcp 00:16:44.701 adrfam: ipv4 00:16:44.701 subtype: nvme subsystem 00:16:44.701 treq: not specified, sq flow control disable supported 00:16:44.701 portid: 1 00:16:44.701 trsvcid: 4420 00:16:44.701 subnqn: nqn.2016-06.io.spdk:testnqn 00:16:44.701 traddr: 10.0.0.1 00:16:44.701 eflags: none 00:16:44.701 sectype: none 00:16:44.701 16:28:18 -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:16:44.701 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:16:44.968 ===================================================== 00:16:44.968 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:16:44.968 ===================================================== 00:16:44.968 Controller Capabilities/Features 00:16:44.969 ================================ 00:16:44.969 Vendor ID: 0000 00:16:44.969 Subsystem Vendor ID: 0000 00:16:44.969 Serial Number: 912d7c761f41924eb787 00:16:44.969 Model Number: Linux 00:16:44.969 Firmware Version: 6.7.0-68 00:16:44.969 Recommended Arb Burst: 0 00:16:44.969 IEEE OUI Identifier: 00 00 00 00:16:44.969 Multi-path I/O 00:16:44.969 May have multiple subsystem ports: No 00:16:44.969 May have multiple controllers: No 00:16:44.969 Associated with SR-IOV VF: No 00:16:44.969 Max Data Transfer Size: Unlimited 00:16:44.969 Max Number of Namespaces: 0 00:16:44.969 Max Number of I/O Queues: 1024 00:16:44.969 NVMe Specification Version (VS): 1.3 00:16:44.969 NVMe Specification Version (Identify): 1.3 00:16:44.969 Maximum Queue Entries: 1024 00:16:44.969 Contiguous Queues Required: No 00:16:44.969 Arbitration Mechanisms Supported 00:16:44.969 Weighted Round Robin: Not Supported 00:16:44.969 Vendor Specific: Not Supported 00:16:44.969 Reset Timeout: 7500 ms 00:16:44.969 Doorbell Stride: 4 bytes 00:16:44.969 NVM Subsystem Reset: Not Supported 00:16:44.969 Command Sets Supported 00:16:44.969 NVM Command Set: Supported 00:16:44.969 Boot Partition: Not Supported 00:16:44.969 Memory Page Size Minimum: 4096 bytes 00:16:44.969 Memory Page Size Maximum: 4096 bytes 00:16:44.969 Persistent Memory Region: Not Supported 00:16:44.969 Optional Asynchronous Events Supported 00:16:44.969 Namespace Attribute Notices: Not Supported 00:16:44.969 Firmware Activation Notices: Not Supported 00:16:44.969 ANA Change Notices: Not Supported 00:16:44.969 PLE Aggregate Log Change Notices: Not Supported 00:16:44.969 LBA Status Info Alert Notices: Not Supported 00:16:44.969 EGE Aggregate Log Change Notices: Not Supported 00:16:44.969 Normal NVM Subsystem Shutdown event: Not Supported 00:16:44.969 Zone Descriptor Change Notices: Not Supported 00:16:44.969 Discovery Log Change Notices: Supported 00:16:44.969 Controller Attributes 00:16:44.969 128-bit Host Identifier: Not Supported 00:16:44.969 Non-Operational Permissive Mode: Not Supported 00:16:44.969 NVM Sets: Not Supported 00:16:44.969 Read Recovery Levels: Not Supported 00:16:44.969 Endurance Groups: Not Supported 00:16:44.969 Predictable Latency Mode: Not Supported 00:16:44.969 Traffic Based Keep ALive: Not Supported 00:16:44.969 Namespace Granularity: Not Supported 00:16:44.969 SQ Associations: Not Supported 00:16:44.969 UUID List: Not Supported 00:16:44.969 Multi-Domain Subsystem: Not Supported 00:16:44.969 Fixed Capacity Management: Not Supported 
00:16:44.969 Variable Capacity Management: Not Supported 00:16:44.969 Delete Endurance Group: Not Supported 00:16:44.969 Delete NVM Set: Not Supported 00:16:44.969 Extended LBA Formats Supported: Not Supported 00:16:44.969 Flexible Data Placement Supported: Not Supported 00:16:44.969 00:16:44.969 Controller Memory Buffer Support 00:16:44.969 ================================ 00:16:44.969 Supported: No 00:16:44.969 00:16:44.969 Persistent Memory Region Support 00:16:44.969 ================================ 00:16:44.969 Supported: No 00:16:44.969 00:16:44.969 Admin Command Set Attributes 00:16:44.969 ============================ 00:16:44.969 Security Send/Receive: Not Supported 00:16:44.969 Format NVM: Not Supported 00:16:44.969 Firmware Activate/Download: Not Supported 00:16:44.969 Namespace Management: Not Supported 00:16:44.969 Device Self-Test: Not Supported 00:16:44.969 Directives: Not Supported 00:16:44.969 NVMe-MI: Not Supported 00:16:44.969 Virtualization Management: Not Supported 00:16:44.969 Doorbell Buffer Config: Not Supported 00:16:44.969 Get LBA Status Capability: Not Supported 00:16:44.969 Command & Feature Lockdown Capability: Not Supported 00:16:44.969 Abort Command Limit: 1 00:16:44.969 Async Event Request Limit: 1 00:16:44.969 Number of Firmware Slots: N/A 00:16:44.969 Firmware Slot 1 Read-Only: N/A 00:16:44.969 Firmware Activation Without Reset: N/A 00:16:44.969 Multiple Update Detection Support: N/A 00:16:44.969 Firmware Update Granularity: No Information Provided 00:16:44.969 Per-Namespace SMART Log: No 00:16:44.969 Asymmetric Namespace Access Log Page: Not Supported 00:16:44.969 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:16:44.969 Command Effects Log Page: Not Supported 00:16:44.969 Get Log Page Extended Data: Supported 00:16:44.969 Telemetry Log Pages: Not Supported 00:16:44.969 Persistent Event Log Pages: Not Supported 00:16:44.969 Supported Log Pages Log Page: May Support 00:16:44.969 Commands Supported & Effects Log Page: Not Supported 00:16:44.969 Feature Identifiers & Effects Log Page:May Support 00:16:44.969 NVMe-MI Commands & Effects Log Page: May Support 00:16:44.969 Data Area 4 for Telemetry Log: Not Supported 00:16:44.969 Error Log Page Entries Supported: 1 00:16:44.969 Keep Alive: Not Supported 00:16:44.969 00:16:44.969 NVM Command Set Attributes 00:16:44.969 ========================== 00:16:44.969 Submission Queue Entry Size 00:16:44.969 Max: 1 00:16:44.969 Min: 1 00:16:44.969 Completion Queue Entry Size 00:16:44.969 Max: 1 00:16:44.969 Min: 1 00:16:44.969 Number of Namespaces: 0 00:16:44.969 Compare Command: Not Supported 00:16:44.969 Write Uncorrectable Command: Not Supported 00:16:44.969 Dataset Management Command: Not Supported 00:16:44.969 Write Zeroes Command: Not Supported 00:16:44.969 Set Features Save Field: Not Supported 00:16:44.969 Reservations: Not Supported 00:16:44.969 Timestamp: Not Supported 00:16:44.969 Copy: Not Supported 00:16:44.969 Volatile Write Cache: Not Present 00:16:44.969 Atomic Write Unit (Normal): 1 00:16:44.969 Atomic Write Unit (PFail): 1 00:16:44.969 Atomic Compare & Write Unit: 1 00:16:44.969 Fused Compare & Write: Not Supported 00:16:44.969 Scatter-Gather List 00:16:44.969 SGL Command Set: Supported 00:16:44.969 SGL Keyed: Not Supported 00:16:44.969 SGL Bit Bucket Descriptor: Not Supported 00:16:44.969 SGL Metadata Pointer: Not Supported 00:16:44.969 Oversized SGL: Not Supported 00:16:44.969 SGL Metadata Address: Not Supported 00:16:44.969 SGL Offset: Supported 00:16:44.969 Transport SGL Data Block: Not 
Supported 00:16:44.969 Replay Protected Memory Block: Not Supported 00:16:44.969 00:16:44.969 Firmware Slot Information 00:16:44.969 ========================= 00:16:44.969 Active slot: 0 00:16:44.969 00:16:44.969 00:16:44.969 Error Log 00:16:44.969 ========= 00:16:44.969 00:16:44.969 Active Namespaces 00:16:44.969 ================= 00:16:44.969 Discovery Log Page 00:16:44.969 ================== 00:16:44.969 Generation Counter: 2 00:16:44.969 Number of Records: 2 00:16:44.969 Record Format: 0 00:16:44.969 00:16:44.969 Discovery Log Entry 0 00:16:44.969 ---------------------- 00:16:44.969 Transport Type: 3 (TCP) 00:16:44.969 Address Family: 1 (IPv4) 00:16:44.969 Subsystem Type: 3 (Current Discovery Subsystem) 00:16:44.969 Entry Flags: 00:16:44.969 Duplicate Returned Information: 0 00:16:44.969 Explicit Persistent Connection Support for Discovery: 0 00:16:44.969 Transport Requirements: 00:16:44.969 Secure Channel: Not Specified 00:16:44.969 Port ID: 1 (0x0001) 00:16:44.969 Controller ID: 65535 (0xffff) 00:16:44.969 Admin Max SQ Size: 32 00:16:44.969 Transport Service Identifier: 4420 00:16:44.969 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:16:44.969 Transport Address: 10.0.0.1 00:16:44.969 Discovery Log Entry 1 00:16:44.969 ---------------------- 00:16:44.969 Transport Type: 3 (TCP) 00:16:44.969 Address Family: 1 (IPv4) 00:16:44.969 Subsystem Type: 2 (NVM Subsystem) 00:16:44.969 Entry Flags: 00:16:44.969 Duplicate Returned Information: 0 00:16:44.969 Explicit Persistent Connection Support for Discovery: 0 00:16:44.969 Transport Requirements: 00:16:44.969 Secure Channel: Not Specified 00:16:44.969 Port ID: 1 (0x0001) 00:16:44.969 Controller ID: 65535 (0xffff) 00:16:44.970 Admin Max SQ Size: 32 00:16:44.970 Transport Service Identifier: 4420 00:16:44.970 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:16:44.970 Transport Address: 10.0.0.1 00:16:44.970 16:28:18 -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:16:44.970 get_feature(0x01) failed 00:16:44.970 get_feature(0x02) failed 00:16:44.970 get_feature(0x04) failed 00:16:44.970 ===================================================== 00:16:44.970 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:16:44.970 ===================================================== 00:16:44.970 Controller Capabilities/Features 00:16:44.970 ================================ 00:16:44.970 Vendor ID: 0000 00:16:44.970 Subsystem Vendor ID: 0000 00:16:44.970 Serial Number: 90bdf786aa3f2167f123 00:16:44.970 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:16:44.970 Firmware Version: 6.7.0-68 00:16:44.970 Recommended Arb Burst: 6 00:16:44.970 IEEE OUI Identifier: 00 00 00 00:16:44.970 Multi-path I/O 00:16:44.970 May have multiple subsystem ports: Yes 00:16:44.970 May have multiple controllers: Yes 00:16:44.970 Associated with SR-IOV VF: No 00:16:44.970 Max Data Transfer Size: Unlimited 00:16:44.970 Max Number of Namespaces: 1024 00:16:44.970 Max Number of I/O Queues: 128 00:16:44.970 NVMe Specification Version (VS): 1.3 00:16:44.970 NVMe Specification Version (Identify): 1.3 00:16:44.970 Maximum Queue Entries: 1024 00:16:44.970 Contiguous Queues Required: No 00:16:44.970 Arbitration Mechanisms Supported 00:16:44.970 Weighted Round Robin: Not Supported 00:16:44.970 Vendor Specific: Not Supported 00:16:44.970 Reset Timeout: 7500 ms 00:16:44.970 Doorbell Stride: 4 bytes 
00:16:44.970 NVM Subsystem Reset: Not Supported 00:16:44.970 Command Sets Supported 00:16:44.970 NVM Command Set: Supported 00:16:44.970 Boot Partition: Not Supported 00:16:44.970 Memory Page Size Minimum: 4096 bytes 00:16:44.970 Memory Page Size Maximum: 4096 bytes 00:16:44.970 Persistent Memory Region: Not Supported 00:16:44.970 Optional Asynchronous Events Supported 00:16:44.970 Namespace Attribute Notices: Supported 00:16:44.970 Firmware Activation Notices: Not Supported 00:16:44.970 ANA Change Notices: Supported 00:16:44.970 PLE Aggregate Log Change Notices: Not Supported 00:16:44.970 LBA Status Info Alert Notices: Not Supported 00:16:44.970 EGE Aggregate Log Change Notices: Not Supported 00:16:44.970 Normal NVM Subsystem Shutdown event: Not Supported 00:16:44.970 Zone Descriptor Change Notices: Not Supported 00:16:44.970 Discovery Log Change Notices: Not Supported 00:16:44.970 Controller Attributes 00:16:44.970 128-bit Host Identifier: Supported 00:16:44.970 Non-Operational Permissive Mode: Not Supported 00:16:44.970 NVM Sets: Not Supported 00:16:44.970 Read Recovery Levels: Not Supported 00:16:44.970 Endurance Groups: Not Supported 00:16:44.970 Predictable Latency Mode: Not Supported 00:16:44.970 Traffic Based Keep ALive: Supported 00:16:44.970 Namespace Granularity: Not Supported 00:16:44.970 SQ Associations: Not Supported 00:16:44.970 UUID List: Not Supported 00:16:44.970 Multi-Domain Subsystem: Not Supported 00:16:44.970 Fixed Capacity Management: Not Supported 00:16:44.970 Variable Capacity Management: Not Supported 00:16:44.970 Delete Endurance Group: Not Supported 00:16:44.970 Delete NVM Set: Not Supported 00:16:44.970 Extended LBA Formats Supported: Not Supported 00:16:44.970 Flexible Data Placement Supported: Not Supported 00:16:44.970 00:16:44.970 Controller Memory Buffer Support 00:16:44.970 ================================ 00:16:44.970 Supported: No 00:16:44.970 00:16:44.970 Persistent Memory Region Support 00:16:44.970 ================================ 00:16:44.970 Supported: No 00:16:44.970 00:16:44.970 Admin Command Set Attributes 00:16:44.970 ============================ 00:16:44.970 Security Send/Receive: Not Supported 00:16:44.970 Format NVM: Not Supported 00:16:44.970 Firmware Activate/Download: Not Supported 00:16:44.970 Namespace Management: Not Supported 00:16:44.970 Device Self-Test: Not Supported 00:16:44.970 Directives: Not Supported 00:16:44.970 NVMe-MI: Not Supported 00:16:44.970 Virtualization Management: Not Supported 00:16:44.970 Doorbell Buffer Config: Not Supported 00:16:44.970 Get LBA Status Capability: Not Supported 00:16:44.970 Command & Feature Lockdown Capability: Not Supported 00:16:44.970 Abort Command Limit: 4 00:16:44.970 Async Event Request Limit: 4 00:16:44.970 Number of Firmware Slots: N/A 00:16:44.970 Firmware Slot 1 Read-Only: N/A 00:16:44.970 Firmware Activation Without Reset: N/A 00:16:44.970 Multiple Update Detection Support: N/A 00:16:44.970 Firmware Update Granularity: No Information Provided 00:16:44.970 Per-Namespace SMART Log: Yes 00:16:44.970 Asymmetric Namespace Access Log Page: Supported 00:16:44.970 ANA Transition Time : 10 sec 00:16:44.970 00:16:44.970 Asymmetric Namespace Access Capabilities 00:16:44.970 ANA Optimized State : Supported 00:16:44.970 ANA Non-Optimized State : Supported 00:16:44.970 ANA Inaccessible State : Supported 00:16:44.970 ANA Persistent Loss State : Supported 00:16:44.970 ANA Change State : Supported 00:16:44.970 ANAGRPID is not changed : No 00:16:44.970 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 
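(Annotation: the identify output here shows the kernel target advertising full ANA support on nqn.2016-06.io.spdk:testnqn, with per-namespace SMART log, the ANA log page, and all five ANA states. For reference, the same ANA log page, log identifier 0Ch, could also be pulled with nvme-cli after connecting through the kernel initiator; a sketch only, not part of this run, and the /dev/nvme0 device name is an assumption about how the controller would enumerate:

  # connect to the kernel nvmet subsystem over TCP
  nvme connect -t tcp -a 10.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn
  # fetch the raw ANA log page; 4 KiB is plenty for the single group reported here
  nvme get-log /dev/nvme0 --log-id=0x0c --log-len=4096
)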
00:16:44.970 00:16:44.970 ANA Group Identifier Maximum : 128 00:16:44.970 Number of ANA Group Identifiers : 128 00:16:44.970 Max Number of Allowed Namespaces : 1024 00:16:44.970 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:16:44.970 Command Effects Log Page: Supported 00:16:44.970 Get Log Page Extended Data: Supported 00:16:44.970 Telemetry Log Pages: Not Supported 00:16:44.970 Persistent Event Log Pages: Not Supported 00:16:44.970 Supported Log Pages Log Page: May Support 00:16:44.970 Commands Supported & Effects Log Page: Not Supported 00:16:44.970 Feature Identifiers & Effects Log Page:May Support 00:16:44.970 NVMe-MI Commands & Effects Log Page: May Support 00:16:44.970 Data Area 4 for Telemetry Log: Not Supported 00:16:44.970 Error Log Page Entries Supported: 128 00:16:44.970 Keep Alive: Supported 00:16:44.970 Keep Alive Granularity: 1000 ms 00:16:44.970 00:16:44.970 NVM Command Set Attributes 00:16:44.970 ========================== 00:16:44.970 Submission Queue Entry Size 00:16:44.970 Max: 64 00:16:44.970 Min: 64 00:16:44.970 Completion Queue Entry Size 00:16:44.970 Max: 16 00:16:44.970 Min: 16 00:16:44.970 Number of Namespaces: 1024 00:16:44.970 Compare Command: Not Supported 00:16:44.970 Write Uncorrectable Command: Not Supported 00:16:44.970 Dataset Management Command: Supported 00:16:44.970 Write Zeroes Command: Supported 00:16:44.970 Set Features Save Field: Not Supported 00:16:44.970 Reservations: Not Supported 00:16:44.970 Timestamp: Not Supported 00:16:44.970 Copy: Not Supported 00:16:44.970 Volatile Write Cache: Present 00:16:44.970 Atomic Write Unit (Normal): 1 00:16:44.970 Atomic Write Unit (PFail): 1 00:16:44.970 Atomic Compare & Write Unit: 1 00:16:44.970 Fused Compare & Write: Not Supported 00:16:44.970 Scatter-Gather List 00:16:44.970 SGL Command Set: Supported 00:16:44.970 SGL Keyed: Not Supported 00:16:44.971 SGL Bit Bucket Descriptor: Not Supported 00:16:44.971 SGL Metadata Pointer: Not Supported 00:16:44.971 Oversized SGL: Not Supported 00:16:44.971 SGL Metadata Address: Not Supported 00:16:44.971 SGL Offset: Supported 00:16:44.971 Transport SGL Data Block: Not Supported 00:16:44.971 Replay Protected Memory Block: Not Supported 00:16:44.971 00:16:44.971 Firmware Slot Information 00:16:44.971 ========================= 00:16:44.971 Active slot: 0 00:16:44.971 00:16:44.971 Asymmetric Namespace Access 00:16:44.971 =========================== 00:16:44.971 Change Count : 0 00:16:44.971 Number of ANA Group Descriptors : 1 00:16:44.971 ANA Group Descriptor : 0 00:16:44.971 ANA Group ID : 1 00:16:44.971 Number of NSID Values : 1 00:16:44.971 Change Count : 0 00:16:44.971 ANA State : 1 00:16:44.971 Namespace Identifier : 1 00:16:44.971 00:16:44.971 Commands Supported and Effects 00:16:44.971 ============================== 00:16:44.971 Admin Commands 00:16:44.971 -------------- 00:16:44.971 Get Log Page (02h): Supported 00:16:44.971 Identify (06h): Supported 00:16:44.971 Abort (08h): Supported 00:16:44.971 Set Features (09h): Supported 00:16:44.971 Get Features (0Ah): Supported 00:16:44.971 Asynchronous Event Request (0Ch): Supported 00:16:44.971 Keep Alive (18h): Supported 00:16:44.971 I/O Commands 00:16:44.971 ------------ 00:16:44.971 Flush (00h): Supported 00:16:44.971 Write (01h): Supported LBA-Change 00:16:44.971 Read (02h): Supported 00:16:44.971 Write Zeroes (08h): Supported LBA-Change 00:16:44.971 Dataset Management (09h): Supported 00:16:44.971 00:16:44.971 Error Log 00:16:44.971 ========= 00:16:44.971 Entry: 0 00:16:44.971 Error Count: 0x3 00:16:44.971 Submission 
Queue Id: 0x0 00:16:44.971 Command Id: 0x5 00:16:44.971 Phase Bit: 0 00:16:44.971 Status Code: 0x2 00:16:44.971 Status Code Type: 0x0 00:16:44.971 Do Not Retry: 1 00:16:44.971 Error Location: 0x28 00:16:44.971 LBA: 0x0 00:16:44.971 Namespace: 0x0 00:16:44.971 Vendor Log Page: 0x0 00:16:44.971 ----------- 00:16:44.971 Entry: 1 00:16:44.971 Error Count: 0x2 00:16:44.971 Submission Queue Id: 0x0 00:16:44.971 Command Id: 0x5 00:16:44.971 Phase Bit: 0 00:16:44.971 Status Code: 0x2 00:16:44.971 Status Code Type: 0x0 00:16:44.971 Do Not Retry: 1 00:16:44.971 Error Location: 0x28 00:16:44.971 LBA: 0x0 00:16:44.971 Namespace: 0x0 00:16:44.971 Vendor Log Page: 0x0 00:16:44.971 ----------- 00:16:44.971 Entry: 2 00:16:44.971 Error Count: 0x1 00:16:44.971 Submission Queue Id: 0x0 00:16:44.971 Command Id: 0x4 00:16:44.971 Phase Bit: 0 00:16:44.971 Status Code: 0x2 00:16:44.971 Status Code Type: 0x0 00:16:44.971 Do Not Retry: 1 00:16:44.971 Error Location: 0x28 00:16:44.971 LBA: 0x0 00:16:44.971 Namespace: 0x0 00:16:44.971 Vendor Log Page: 0x0 00:16:44.971 00:16:44.971 Number of Queues 00:16:44.971 ================ 00:16:44.971 Number of I/O Submission Queues: 128 00:16:44.971 Number of I/O Completion Queues: 128 00:16:44.971 00:16:44.971 ZNS Specific Controller Data 00:16:44.971 ============================ 00:16:44.971 Zone Append Size Limit: 0 00:16:44.971 00:16:44.971 00:16:44.971 Active Namespaces 00:16:44.971 ================= 00:16:44.971 get_feature(0x05) failed 00:16:44.971 Namespace ID:1 00:16:44.971 Command Set Identifier: NVM (00h) 00:16:44.971 Deallocate: Supported 00:16:44.971 Deallocated/Unwritten Error: Not Supported 00:16:44.971 Deallocated Read Value: Unknown 00:16:44.971 Deallocate in Write Zeroes: Not Supported 00:16:44.971 Deallocated Guard Field: 0xFFFF 00:16:44.971 Flush: Supported 00:16:44.971 Reservation: Not Supported 00:16:44.971 Namespace Sharing Capabilities: Multiple Controllers 00:16:44.971 Size (in LBAs): 1310720 (5GiB) 00:16:44.971 Capacity (in LBAs): 1310720 (5GiB) 00:16:44.971 Utilization (in LBAs): 1310720 (5GiB) 00:16:44.971 UUID: c4c08a82-dbde-47c6-9271-4f40db135937 00:16:44.971 Thin Provisioning: Not Supported 00:16:44.971 Per-NS Atomic Units: Yes 00:16:44.971 Atomic Boundary Size (Normal): 0 00:16:44.971 Atomic Boundary Size (PFail): 0 00:16:44.971 Atomic Boundary Offset: 0 00:16:44.971 NGUID/EUI64 Never Reused: No 00:16:44.971 ANA group ID: 1 00:16:44.971 Namespace Write Protected: No 00:16:44.971 Number of LBA Formats: 1 00:16:44.971 Current LBA Format: LBA Format #00 00:16:44.971 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:16:44.971 00:16:44.971 16:28:18 -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:16:44.971 16:28:18 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:44.971 16:28:18 -- nvmf/common.sh@117 -- # sync 00:16:45.242 16:28:19 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:45.243 16:28:19 -- nvmf/common.sh@120 -- # set +e 00:16:45.243 16:28:19 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:45.243 16:28:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:45.243 rmmod nvme_tcp 00:16:45.243 rmmod nvme_fabrics 00:16:45.243 16:28:19 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:45.243 16:28:19 -- nvmf/common.sh@124 -- # set -e 00:16:45.243 16:28:19 -- nvmf/common.sh@125 -- # return 0 00:16:45.243 16:28:19 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:16:45.243 16:28:19 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:45.243 16:28:19 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:45.243 16:28:19 -- 
nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:45.243 16:28:19 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:45.243 16:28:19 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:45.243 16:28:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:45.243 16:28:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:45.243 16:28:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.243 16:28:19 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:45.243 16:28:19 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:16:45.243 16:28:19 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:16:45.243 16:28:19 -- nvmf/common.sh@675 -- # echo 0 00:16:45.243 16:28:19 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:45.243 16:28:19 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:45.243 16:28:19 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:16:45.243 16:28:19 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:45.243 16:28:19 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:16:45.243 16:28:19 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:16:45.243 16:28:19 -- nvmf/common.sh@687 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:45.809 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:46.067 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:46.068 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:46.068 00:16:46.068 real 0m2.774s 00:16:46.068 user 0m0.911s 00:16:46.068 sys 0m1.344s 00:16:46.068 16:28:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:46.068 16:28:20 -- common/autotest_common.sh@10 -- # set +x 00:16:46.068 ************************************ 00:16:46.068 END TEST nvmf_identify_kernel_target 00:16:46.068 ************************************ 00:16:46.068 16:28:20 -- nvmf/nvmf.sh@102 -- # run_test nvmf_auth /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:16:46.068 16:28:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:46.068 16:28:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:46.068 16:28:20 -- common/autotest_common.sh@10 -- # set +x 00:16:46.326 ************************************ 00:16:46.326 START TEST nvmf_auth 00:16:46.326 ************************************ 00:16:46.326 16:28:20 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:16:46.326 * Looking for test storage... 
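(Annotation: just before the END TEST marker above, clean_kernel_target unwinds the nvmet configfs tree in dependency order: disable the namespace, unlink the subsystem from the port, then remove the namespace, port, and subsystem directories before unloading the modules. A condensed sketch of the traced sequence; the trace shows only the echo/rm arguments, so the configfs attribute target of the 'echo 0' is filled in here as an assumption:

  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  echo 0 > "$subsys/namespaces/1/enable"   # assumed attribute path for the traced 'echo 0'
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
  rmdir "$subsys/namespaces/1"
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir "$subsys"
  modprobe -r nvmet_tcp nvmet
)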
00:16:46.326 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:46.326 16:28:20 -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:46.326 16:28:20 -- nvmf/common.sh@7 -- # uname -s 00:16:46.326 16:28:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:46.326 16:28:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:46.326 16:28:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:46.326 16:28:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:46.326 16:28:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:46.326 16:28:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:46.326 16:28:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:46.326 16:28:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:46.326 16:28:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:46.326 16:28:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:46.326 16:28:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d 00:16:46.326 16:28:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=35bbb10f-fc38-42ac-b909-033700c5e05d 00:16:46.326 16:28:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:46.326 16:28:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:46.326 16:28:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:46.326 16:28:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:46.326 16:28:20 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:46.326 16:28:20 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:46.327 16:28:20 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:46.327 16:28:20 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:46.327 16:28:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.327 16:28:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.327 16:28:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.327 16:28:20 -- paths/export.sh@5 -- # export PATH 00:16:46.327 16:28:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.327 16:28:20 -- nvmf/common.sh@47 -- # : 0 00:16:46.327 16:28:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:46.327 16:28:20 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:46.327 16:28:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:46.327 16:28:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:46.327 16:28:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:46.327 16:28:20 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:46.327 16:28:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:46.327 16:28:20 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:46.327 16:28:20 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:46.327 16:28:20 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:46.327 16:28:20 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:16:46.327 16:28:20 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:16:46.327 16:28:20 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:46.327 16:28:20 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:16:46.327 16:28:20 -- host/auth.sh@21 -- # keys=() 00:16:46.327 16:28:20 -- host/auth.sh@77 -- # nvmftestinit 00:16:46.327 16:28:20 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:46.327 16:28:20 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:46.327 16:28:20 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:46.327 16:28:20 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:46.327 16:28:20 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:46.327 16:28:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:46.327 16:28:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:46.327 16:28:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.327 16:28:20 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:16:46.327 16:28:20 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:16:46.327 16:28:20 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:16:46.327 16:28:20 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:16:46.327 16:28:20 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:16:46.327 16:28:20 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:16:46.327 16:28:20 -- 
nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:46.327 16:28:20 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:46.327 16:28:20 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:46.327 16:28:20 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:46.327 16:28:20 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:46.327 16:28:20 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:46.327 16:28:20 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:46.327 16:28:20 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:46.327 16:28:20 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:46.327 16:28:20 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:46.327 16:28:20 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:46.327 16:28:20 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:46.327 16:28:20 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:46.327 16:28:20 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:46.327 Cannot find device "nvmf_tgt_br" 00:16:46.327 16:28:20 -- nvmf/common.sh@155 -- # true 00:16:46.327 16:28:20 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:46.327 Cannot find device "nvmf_tgt_br2" 00:16:46.327 16:28:20 -- nvmf/common.sh@156 -- # true 00:16:46.327 16:28:20 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:46.327 16:28:20 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:46.327 Cannot find device "nvmf_tgt_br" 00:16:46.327 16:28:20 -- nvmf/common.sh@158 -- # true 00:16:46.327 16:28:20 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:46.327 Cannot find device "nvmf_tgt_br2" 00:16:46.327 16:28:20 -- nvmf/common.sh@159 -- # true 00:16:46.327 16:28:20 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:46.327 16:28:20 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:46.586 16:28:20 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:46.586 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:46.586 16:28:20 -- nvmf/common.sh@162 -- # true 00:16:46.586 16:28:20 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:46.586 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:46.586 16:28:20 -- nvmf/common.sh@163 -- # true 00:16:46.586 16:28:20 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:46.586 16:28:20 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:46.586 16:28:20 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:46.586 16:28:20 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:46.586 16:28:20 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:46.586 16:28:20 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:46.586 16:28:20 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:46.586 16:28:20 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:46.586 16:28:20 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:46.586 16:28:20 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:46.586 16:28:20 -- 
nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:46.586 16:28:20 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:46.586 16:28:20 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:46.586 16:28:20 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:46.586 16:28:20 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:46.586 16:28:20 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:46.586 16:28:20 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:46.586 16:28:20 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:46.586 16:28:20 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:46.586 16:28:20 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:46.586 16:28:20 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:46.586 16:28:20 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:46.586 16:28:20 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:46.586 16:28:20 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:46.586 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:46.586 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:16:46.586 00:16:46.586 --- 10.0.0.2 ping statistics --- 00:16:46.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.586 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:16:46.586 16:28:20 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:46.586 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:46.586 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:16:46.586 00:16:46.586 --- 10.0.0.3 ping statistics --- 00:16:46.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.586 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:16:46.586 16:28:20 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:46.586 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
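(Annotation: nvmf_veth_init above rebuilds the test network from scratch: a namespace for the target, veth pairs for the initiator and the two target interfaces, and a bridge joining the host-side peers; the three pings then verify 10.0.0.2 and 10.0.0.3 from the host and 10.0.0.1 from inside the namespace. A condensed sketch of the traced steps for the first target interface; nvmf_tgt_if2 at 10.0.0.3 follows the same pattern:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
)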
00:16:46.586 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:16:46.586 00:16:46.586 --- 10.0.0.1 ping statistics --- 00:16:46.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.586 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:16:46.586 16:28:20 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:46.586 16:28:20 -- nvmf/common.sh@422 -- # return 0 00:16:46.586 16:28:20 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:46.586 16:28:20 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:46.586 16:28:20 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:46.586 16:28:20 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:46.586 16:28:20 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:46.586 16:28:20 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:46.586 16:28:20 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:46.586 16:28:20 -- host/auth.sh@78 -- # nvmfappstart -L nvme_auth 00:16:46.844 16:28:20 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:46.844 16:28:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:46.844 16:28:20 -- common/autotest_common.sh@10 -- # set +x 00:16:46.844 16:28:20 -- nvmf/common.sh@470 -- # nvmfpid=83684 00:16:46.844 16:28:20 -- nvmf/common.sh@471 -- # waitforlisten 83684 00:16:46.844 16:28:20 -- common/autotest_common.sh@817 -- # '[' -z 83684 ']' 00:16:46.844 16:28:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.844 16:28:20 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:16:46.844 16:28:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:46.844 16:28:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
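(Annotation: nvmfappstart then launches the target inside the namespace with the nvme_auth debug log flag enabled and polls the RPC socket until it answers. A condensed sketch of the traced launch, paths as in this run; the trace shows nvmfpid=83684 but not how the pid was captured, so the $! capture below is an assumption, and waitforlisten is the harness helper that retries until /var/tmp/spdk.sock accepts RPCs:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
  nvmfpid=$!                 # assumed capture of the traced pid 83684
  waitforlisten "$nvmfpid"   # blocks until the RPC server on /var/tmp/spdk.sock is up
)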
00:16:46.844 16:28:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:46.844 16:28:20 -- common/autotest_common.sh@10 -- # set +x 00:16:47.777 16:28:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:47.777 16:28:21 -- common/autotest_common.sh@850 -- # return 0 00:16:47.777 16:28:21 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:47.777 16:28:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:47.777 16:28:21 -- common/autotest_common.sh@10 -- # set +x 00:16:47.777 16:28:21 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:47.777 16:28:21 -- host/auth.sh@79 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:16:47.777 16:28:21 -- host/auth.sh@81 -- # gen_key null 32 00:16:47.777 16:28:21 -- host/auth.sh@53 -- # local digest len file key 00:16:47.777 16:28:21 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:47.777 16:28:21 -- host/auth.sh@54 -- # local -A digests 00:16:47.777 16:28:21 -- host/auth.sh@56 -- # digest=null 00:16:47.777 16:28:21 -- host/auth.sh@56 -- # len=32 00:16:47.777 16:28:21 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:47.777 16:28:21 -- host/auth.sh@57 -- # key=0702b8a349a790898628dde04e403dac 00:16:47.777 16:28:21 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:16:47.777 16:28:21 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.mi0 00:16:47.777 16:28:21 -- host/auth.sh@59 -- # format_dhchap_key 0702b8a349a790898628dde04e403dac 0 00:16:47.777 16:28:21 -- nvmf/common.sh@708 -- # format_key DHHC-1 0702b8a349a790898628dde04e403dac 0 00:16:47.777 16:28:21 -- nvmf/common.sh@691 -- # local prefix key digest 00:16:47.777 16:28:21 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:16:47.777 16:28:21 -- nvmf/common.sh@693 -- # key=0702b8a349a790898628dde04e403dac 00:16:47.777 16:28:21 -- nvmf/common.sh@693 -- # digest=0 00:16:47.777 16:28:21 -- nvmf/common.sh@694 -- # python - 00:16:47.777 16:28:21 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.mi0 00:16:47.777 16:28:21 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.mi0 00:16:47.777 16:28:21 -- host/auth.sh@81 -- # keys[0]=/tmp/spdk.key-null.mi0 00:16:47.777 16:28:21 -- host/auth.sh@82 -- # gen_key null 48 00:16:47.777 16:28:21 -- host/auth.sh@53 -- # local digest len file key 00:16:47.777 16:28:21 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:47.777 16:28:21 -- host/auth.sh@54 -- # local -A digests 00:16:47.777 16:28:21 -- host/auth.sh@56 -- # digest=null 00:16:47.777 16:28:21 -- host/auth.sh@56 -- # len=48 00:16:47.777 16:28:21 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:47.777 16:28:21 -- host/auth.sh@57 -- # key=c01edd9dfac17d8f4ca4a65b1604972fd264b5355e73d6da 00:16:47.777 16:28:21 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:16:47.777 16:28:21 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.Y8L 00:16:47.777 16:28:21 -- host/auth.sh@59 -- # format_dhchap_key c01edd9dfac17d8f4ca4a65b1604972fd264b5355e73d6da 0 00:16:47.777 16:28:21 -- nvmf/common.sh@708 -- # format_key DHHC-1 c01edd9dfac17d8f4ca4a65b1604972fd264b5355e73d6da 0 00:16:47.777 16:28:21 -- nvmf/common.sh@691 -- # local prefix key digest 00:16:47.777 16:28:21 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:16:47.777 16:28:21 -- nvmf/common.sh@693 -- # key=c01edd9dfac17d8f4ca4a65b1604972fd264b5355e73d6da 00:16:47.777 16:28:21 -- nvmf/common.sh@693 -- # digest=0 00:16:47.777 
16:28:21 -- nvmf/common.sh@694 -- # python - 00:16:48.036 16:28:21 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.Y8L 00:16:48.036 16:28:21 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.Y8L 00:16:48.036 16:28:21 -- host/auth.sh@82 -- # keys[1]=/tmp/spdk.key-null.Y8L 00:16:48.036 16:28:21 -- host/auth.sh@83 -- # gen_key sha256 32 00:16:48.036 16:28:21 -- host/auth.sh@53 -- # local digest len file key 00:16:48.036 16:28:21 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:48.036 16:28:21 -- host/auth.sh@54 -- # local -A digests 00:16:48.036 16:28:21 -- host/auth.sh@56 -- # digest=sha256 00:16:48.036 16:28:21 -- host/auth.sh@56 -- # len=32 00:16:48.036 16:28:21 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:48.036 16:28:21 -- host/auth.sh@57 -- # key=c4ebc07d9d685adf10c67254e4c8fbe8 00:16:48.036 16:28:21 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha256.XXX 00:16:48.036 16:28:21 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha256.L5c 00:16:48.036 16:28:21 -- host/auth.sh@59 -- # format_dhchap_key c4ebc07d9d685adf10c67254e4c8fbe8 1 00:16:48.036 16:28:21 -- nvmf/common.sh@708 -- # format_key DHHC-1 c4ebc07d9d685adf10c67254e4c8fbe8 1 00:16:48.036 16:28:21 -- nvmf/common.sh@691 -- # local prefix key digest 00:16:48.036 16:28:21 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:16:48.036 16:28:21 -- nvmf/common.sh@693 -- # key=c4ebc07d9d685adf10c67254e4c8fbe8 00:16:48.036 16:28:21 -- nvmf/common.sh@693 -- # digest=1 00:16:48.036 16:28:21 -- nvmf/common.sh@694 -- # python - 00:16:48.036 16:28:21 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha256.L5c 00:16:48.036 16:28:21 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha256.L5c 00:16:48.036 16:28:21 -- host/auth.sh@83 -- # keys[2]=/tmp/spdk.key-sha256.L5c 00:16:48.036 16:28:21 -- host/auth.sh@84 -- # gen_key sha384 48 00:16:48.036 16:28:21 -- host/auth.sh@53 -- # local digest len file key 00:16:48.036 16:28:21 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:48.036 16:28:21 -- host/auth.sh@54 -- # local -A digests 00:16:48.036 16:28:21 -- host/auth.sh@56 -- # digest=sha384 00:16:48.036 16:28:21 -- host/auth.sh@56 -- # len=48 00:16:48.036 16:28:21 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:48.036 16:28:21 -- host/auth.sh@57 -- # key=16b0c06f03cdb3a8a36b817628957a6535fda1c7cb9359ff 00:16:48.036 16:28:21 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha384.XXX 00:16:48.036 16:28:21 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha384.247 00:16:48.036 16:28:21 -- host/auth.sh@59 -- # format_dhchap_key 16b0c06f03cdb3a8a36b817628957a6535fda1c7cb9359ff 2 00:16:48.036 16:28:21 -- nvmf/common.sh@708 -- # format_key DHHC-1 16b0c06f03cdb3a8a36b817628957a6535fda1c7cb9359ff 2 00:16:48.036 16:28:21 -- nvmf/common.sh@691 -- # local prefix key digest 00:16:48.036 16:28:21 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:16:48.036 16:28:21 -- nvmf/common.sh@693 -- # key=16b0c06f03cdb3a8a36b817628957a6535fda1c7cb9359ff 00:16:48.036 16:28:21 -- nvmf/common.sh@693 -- # digest=2 00:16:48.036 16:28:21 -- nvmf/common.sh@694 -- # python - 00:16:48.036 16:28:21 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha384.247 00:16:48.036 16:28:21 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha384.247 00:16:48.036 16:28:21 -- host/auth.sh@84 -- # keys[3]=/tmp/spdk.key-sha384.247 00:16:48.036 16:28:21 -- host/auth.sh@85 -- # gen_key sha512 64 00:16:48.036 16:28:21 -- host/auth.sh@53 -- # local digest len file key 00:16:48.036 16:28:21 -- host/auth.sh@54 -- # 
digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:48.036 16:28:21 -- host/auth.sh@54 -- # local -A digests 00:16:48.036 16:28:21 -- host/auth.sh@56 -- # digest=sha512 00:16:48.036 16:28:21 -- host/auth.sh@56 -- # len=64 00:16:48.036 16:28:21 -- host/auth.sh@57 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:48.036 16:28:21 -- host/auth.sh@57 -- # key=f0c9d0b4693f494f8734b0451cdc6686949bbbd5c564160df426f427de4221e3 00:16:48.036 16:28:21 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha512.XXX 00:16:48.036 16:28:21 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha512.B5L 00:16:48.036 16:28:21 -- host/auth.sh@59 -- # format_dhchap_key f0c9d0b4693f494f8734b0451cdc6686949bbbd5c564160df426f427de4221e3 3 00:16:48.036 16:28:21 -- nvmf/common.sh@708 -- # format_key DHHC-1 f0c9d0b4693f494f8734b0451cdc6686949bbbd5c564160df426f427de4221e3 3 00:16:48.036 16:28:21 -- nvmf/common.sh@691 -- # local prefix key digest 00:16:48.036 16:28:21 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:16:48.036 16:28:21 -- nvmf/common.sh@693 -- # key=f0c9d0b4693f494f8734b0451cdc6686949bbbd5c564160df426f427de4221e3 00:16:48.036 16:28:21 -- nvmf/common.sh@693 -- # digest=3 00:16:48.036 16:28:21 -- nvmf/common.sh@694 -- # python - 00:16:48.036 16:28:22 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha512.B5L 00:16:48.036 16:28:22 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha512.B5L 00:16:48.036 16:28:22 -- host/auth.sh@85 -- # keys[4]=/tmp/spdk.key-sha512.B5L 00:16:48.036 16:28:22 -- host/auth.sh@87 -- # waitforlisten 83684 00:16:48.036 16:28:22 -- common/autotest_common.sh@817 -- # '[' -z 83684 ']' 00:16:48.036 16:28:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.036 16:28:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:48.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.036 16:28:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
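(Annotation: gen_key above produces one secret per configuration: null/32, null/48, sha256/32, sha384/48, and sha512/64, each stored as a 0600 file under /tmp. Every secret is raw /dev/urandom bytes rendered as hex and wrapped by format_dhchap_key into the DHHC-1 framing visible in the keys, DHHC-1:<two-digit digest id>:<base64 payload>:, where the payload is believed to be the secret plus a CRC-32 per the DHHC-1 secret format. A condensed sketch of one iteration, the sha512/64 case traced last; the redirection into the key file is inferred, since the trace shows only the commands:

  key=$(xxd -p -c0 -l 32 /dev/urandom)        # 32 random bytes -> 64 hex characters
  file=$(mktemp -t spdk.key-sha512.XXX)
  # digest id 3 == sha512 in the digests map above; the python helper
  # base64-encodes the secret behind the DHHC-1: prefix
  format_dhchap_key "$key" 3 > "$file"        # redirection assumed
  chmod 0600 "$file"
)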
00:16:48.036 16:28:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:48.036 16:28:22 -- common/autotest_common.sh@10 -- # set +x 00:16:48.294 16:28:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:48.294 16:28:22 -- common/autotest_common.sh@850 -- # return 0 00:16:48.294 16:28:22 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:16:48.294 16:28:22 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.mi0 00:16:48.294 16:28:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:48.294 16:28:22 -- common/autotest_common.sh@10 -- # set +x 00:16:48.294 16:28:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:48.294 16:28:22 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:16:48.294 16:28:22 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Y8L 00:16:48.294 16:28:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:48.294 16:28:22 -- common/autotest_common.sh@10 -- # set +x 00:16:48.294 16:28:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:48.294 16:28:22 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:16:48.294 16:28:22 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.L5c 00:16:48.294 16:28:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:48.294 16:28:22 -- common/autotest_common.sh@10 -- # set +x 00:16:48.294 16:28:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:48.294 16:28:22 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:16:48.294 16:28:22 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.247 00:16:48.294 16:28:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:48.294 16:28:22 -- common/autotest_common.sh@10 -- # set +x 00:16:48.294 16:28:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:48.294 16:28:22 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:16:48.294 16:28:22 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.B5L 00:16:48.294 16:28:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:48.294 16:28:22 -- common/autotest_common.sh@10 -- # set +x 00:16:48.294 16:28:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:48.294 16:28:22 -- host/auth.sh@92 -- # nvmet_auth_init 00:16:48.294 16:28:22 -- host/auth.sh@35 -- # get_main_ns_ip 00:16:48.294 16:28:22 -- nvmf/common.sh@717 -- # local ip 00:16:48.294 16:28:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:48.294 16:28:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:48.294 16:28:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:48.294 16:28:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:48.294 16:28:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:48.294 16:28:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:48.294 16:28:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:48.294 16:28:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:48.294 16:28:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:48.294 16:28:22 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:16:48.294 16:28:22 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:16:48.294 16:28:22 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:16:48.294 16:28:22 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:48.294 16:28:22 -- nvmf/common.sh@625 -- # 
kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:16:48.294 16:28:22 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:16:48.294 16:28:22 -- nvmf/common.sh@628 -- # local block nvme 00:16:48.294 16:28:22 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:16:48.294 16:28:22 -- nvmf/common.sh@631 -- # modprobe nvmet 00:16:48.552 16:28:22 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:16:48.552 16:28:22 -- nvmf/common.sh@636 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:48.810 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:48.810 Waiting for block devices as requested 00:16:48.810 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:48.810 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:49.744 16:28:23 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:16:49.744 16:28:23 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:49.744 16:28:23 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:16:49.744 16:28:23 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:16:49.744 16:28:23 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:49.744 16:28:23 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:49.744 16:28:23 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:16:49.744 16:28:23 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:16:49.744 16:28:23 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:16:49.744 No valid GPT data, bailing 00:16:49.744 16:28:23 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:49.744 16:28:23 -- scripts/common.sh@391 -- # pt= 00:16:49.744 16:28:23 -- scripts/common.sh@392 -- # return 1 00:16:49.744 16:28:23 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:16:49.744 16:28:23 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:16:49.744 16:28:23 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n2 ]] 00:16:49.744 16:28:23 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n2 00:16:49.744 16:28:23 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:16:49.744 16:28:23 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:16:49.744 16:28:23 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:49.744 16:28:23 -- nvmf/common.sh@642 -- # block_in_use nvme0n2 00:16:49.744 16:28:23 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:16:49.744 16:28:23 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:16:49.744 No valid GPT data, bailing 00:16:49.744 16:28:23 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:16:49.744 16:28:23 -- scripts/common.sh@391 -- # pt= 00:16:49.744 16:28:23 -- scripts/common.sh@392 -- # return 1 00:16:49.744 16:28:23 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n2 00:16:49.744 16:28:23 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:16:49.744 16:28:23 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n3 ]] 00:16:49.744 16:28:23 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n3 00:16:49.744 16:28:23 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:16:49.744 16:28:23 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:49.744 16:28:23 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:49.744 16:28:23 -- nvmf/common.sh@642 -- # block_in_use 
nvme0n3 00:16:49.744 16:28:23 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:16:49.744 16:28:23 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:16:49.744 No valid GPT data, bailing 00:16:49.744 16:28:23 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:16:49.744 16:28:23 -- scripts/common.sh@391 -- # pt= 00:16:49.744 16:28:23 -- scripts/common.sh@392 -- # return 1 00:16:49.744 16:28:23 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n3 00:16:49.744 16:28:23 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:16:49.744 16:28:23 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:16:49.744 16:28:23 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:16:49.744 16:28:23 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:16:49.744 16:28:23 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:49.744 16:28:23 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:49.744 16:28:23 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:16:49.744 16:28:23 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:16:49.744 16:28:23 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:16:49.744 No valid GPT data, bailing 00:16:49.744 16:28:23 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:16:49.744 16:28:23 -- scripts/common.sh@391 -- # pt= 00:16:49.744 16:28:23 -- scripts/common.sh@392 -- # return 1 00:16:49.744 16:28:23 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:16:49.744 16:28:23 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme1n1 ]] 00:16:49.744 16:28:23 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:49.744 16:28:23 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:16:49.744 16:28:23 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:16:49.744 16:28:23 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:16:49.744 16:28:23 -- nvmf/common.sh@656 -- # echo 1 00:16:49.744 16:28:23 -- nvmf/common.sh@657 -- # echo /dev/nvme1n1 00:16:49.744 16:28:23 -- nvmf/common.sh@658 -- # echo 1 00:16:49.744 16:28:23 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:16:49.744 16:28:23 -- nvmf/common.sh@661 -- # echo tcp 00:16:49.744 16:28:23 -- nvmf/common.sh@662 -- # echo 4420 00:16:49.744 16:28:23 -- nvmf/common.sh@663 -- # echo ipv4 00:16:49.744 16:28:23 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:16:49.744 16:28:23 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d --hostid=35bbb10f-fc38-42ac-b909-033700c5e05d -a 10.0.0.1 -t tcp -s 4420 00:16:49.744 00:16:49.744 Discovery Log Number of Records 2, Generation counter 2 00:16:49.744 =====Discovery Log Entry 0====== 00:16:49.744 trtype: tcp 00:16:49.744 adrfam: ipv4 00:16:49.744 subtype: current discovery subsystem 00:16:49.744 treq: not specified, sq flow control disable supported 00:16:49.744 portid: 1 00:16:49.744 trsvcid: 4420 00:16:49.744 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:49.744 traddr: 10.0.0.1 00:16:49.744 eflags: none 00:16:49.744 sectype: none 00:16:49.744 =====Discovery Log Entry 1====== 00:16:49.744 trtype: tcp 00:16:49.744 adrfam: ipv4 00:16:49.744 subtype: nvme subsystem 00:16:49.744 treq: not specified, sq flow control disable supported 
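(Annotation: configure_kernel_target above is the mirror image of the earlier teardown: it builds the nvmet configfs tree for nqn.2024-02.io.spdk:cnode0, backs the namespace with /dev/nvme1n1, the first block device that passed the GPT-in-use check, and links the subsystem into TCP port 1, at which point it appears as Discovery Log Entry 1. A condensed sketch; the trace shows only the echoed values, so the standard nvmet attribute names below are filled in as assumptions:

  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  port=/sys/kernel/config/nvmet/ports/1
  mkdir -p "$subsys/namespaces/1" "$port"
  echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"          # assumed attr
  echo 1 > "$subsys/attr_allow_any_host"                               # assumed attr
  echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"               # assumed attr
  echo 1 > "$subsys/namespaces/1/enable"
  echo 10.0.0.1 > "$port/addr_traddr"
  echo tcp > "$port/addr_trtype"
  echo 4420 > "$port/addr_trsvcid"
  echo ipv4 > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"
)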
00:16:49.744 portid: 1 00:16:49.744 trsvcid: 4420 00:16:49.744 subnqn: nqn.2024-02.io.spdk:cnode0 00:16:49.744 traddr: 10.0.0.1 00:16:49.744 eflags: none 00:16:49.744 sectype: none 00:16:49.744 16:28:23 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:16:49.744 16:28:23 -- host/auth.sh@37 -- # echo 0 00:16:49.744 16:28:23 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:16:49.744 16:28:23 -- host/auth.sh@95 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:49.744 16:28:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:49.744 16:28:23 -- host/auth.sh@44 -- # digest=sha256 00:16:49.744 16:28:23 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:49.744 16:28:23 -- host/auth.sh@44 -- # keyid=1 00:16:49.744 16:28:23 -- host/auth.sh@45 -- # key=DHHC-1:00:YzAxZWRkOWRmYWMxN2Q4ZjRjYTRhNjViMTYwNDk3MmZkMjY0YjUzNTVlNzNkNmRhPQxEjg==: 00:16:49.744 16:28:23 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:16:49.744 16:28:23 -- host/auth.sh@48 -- # echo ffdhe2048 00:16:50.003 16:28:23 -- host/auth.sh@49 -- # echo DHHC-1:00:YzAxZWRkOWRmYWMxN2Q4ZjRjYTRhNjViMTYwNDk3MmZkMjY0YjUzNTVlNzNkNmRhPQxEjg==: 00:16:50.003 16:28:23 -- host/auth.sh@100 -- # IFS=, 00:16:50.003 16:28:23 -- host/auth.sh@101 -- # printf %s sha256,sha384,sha512 00:16:50.003 16:28:23 -- host/auth.sh@100 -- # IFS=, 00:16:50.003 16:28:23 -- host/auth.sh@101 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:50.003 16:28:23 -- host/auth.sh@100 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:16:50.003 16:28:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:50.003 16:28:23 -- host/auth.sh@68 -- # digest=sha256,sha384,sha512 00:16:50.003 16:28:23 -- host/auth.sh@68 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:50.003 16:28:23 -- host/auth.sh@68 -- # keyid=1 00:16:50.003 16:28:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:50.003 16:28:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.003 16:28:23 -- common/autotest_common.sh@10 -- # set +x 00:16:50.003 16:28:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.003 16:28:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:50.003 16:28:23 -- nvmf/common.sh@717 -- # local ip 00:16:50.003 16:28:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:50.003 16:28:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:50.003 16:28:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:50.003 16:28:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:50.003 16:28:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:50.003 16:28:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:50.003 16:28:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:50.003 16:28:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:50.003 16:28:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:50.003 16:28:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:16:50.003 16:28:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.003 16:28:23 -- common/autotest_common.sh@10 -- # set +x 00:16:50.003 
nvme0n1 00:16:50.003 16:28:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.003 16:28:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:50.003 16:28:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.003 16:28:24 -- common/autotest_common.sh@10 -- # set +x 00:16:50.003 16:28:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:50.003 16:28:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.261 16:28:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.261 16:28:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:50.261 16:28:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.261 16:28:24 -- common/autotest_common.sh@10 -- # set +x 00:16:50.261 16:28:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.261 16:28:24 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:16:50.261 16:28:24 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:16:50.261 16:28:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:50.261 16:28:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:16:50.261 16:28:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:50.261 16:28:24 -- host/auth.sh@44 -- # digest=sha256 00:16:50.261 16:28:24 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:50.261 16:28:24 -- host/auth.sh@44 -- # keyid=0 00:16:50.261 16:28:24 -- host/auth.sh@45 -- # key=DHHC-1:00:MDcwMmI4YTM0OWE3OTA4OTg2MjhkZGUwNGU0MDNkYWMHRk0g: 00:16:50.261 16:28:24 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:16:50.261 16:28:24 -- host/auth.sh@48 -- # echo ffdhe2048 00:16:50.261 16:28:24 -- host/auth.sh@49 -- # echo DHHC-1:00:MDcwMmI4YTM0OWE3OTA4OTg2MjhkZGUwNGU0MDNkYWMHRk0g: 00:16:50.261 16:28:24 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 0 00:16:50.261 16:28:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:50.261 16:28:24 -- host/auth.sh@68 -- # digest=sha256 00:16:50.261 16:28:24 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:16:50.261 16:28:24 -- host/auth.sh@68 -- # keyid=0 00:16:50.261 16:28:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:50.261 16:28:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.261 16:28:24 -- common/autotest_common.sh@10 -- # set +x 00:16:50.261 16:28:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.261 16:28:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:50.261 16:28:24 -- nvmf/common.sh@717 -- # local ip 00:16:50.261 16:28:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:50.261 16:28:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:50.261 16:28:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:50.261 16:28:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:50.261 16:28:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:50.261 16:28:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:50.261 16:28:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:50.261 16:28:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:50.261 16:28:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:50.262 16:28:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:16:50.262 16:28:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.262 16:28:24 -- common/autotest_common.sh@10 -- # set +x 00:16:50.262 nvme0n1 
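(Annotation: the attach just above exercises keyid 0 with sha256/ffdhe2048. nvmet_auth_set_key programs the kernel host entry with the hash, DH group, and DHHC-1 secret, and connect_authenticate then drives the SPDK initiator through two RPCs before verifying the controller with bdev_nvme_get_controllers and detaching it. A condensed sketch; the configfs attribute names on the host entry are the standard nvmet ones, inferred because the trace shows only the echoed values, and rpc_cmd is the harness wrapper around scripts/rpc.py:

  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "$host/dhchap_hash"       # assumed attr
  echo ffdhe2048 > "$host/dhchap_dhgroup"         # assumed attr
  echo 'DHHC-1:00:MDcwMmI4YTM0OWE3OTA4OTg2MjhkZGUwNGU0MDNkYWMHRk0g:' > "$host/dhchap_key"
  # initiator side, via the RPC socket
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
)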
00:16:50.262 16:28:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.262 16:28:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:50.262 16:28:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:50.262 16:28:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.262 16:28:24 -- common/autotest_common.sh@10 -- # set +x 00:16:50.262 16:28:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.262 16:28:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.262 16:28:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:50.262 16:28:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.262 16:28:24 -- common/autotest_common.sh@10 -- # set +x 00:16:50.262 16:28:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.262 16:28:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:50.262 16:28:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:50.262 16:28:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:50.262 16:28:24 -- host/auth.sh@44 -- # digest=sha256 00:16:50.262 16:28:24 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:50.262 16:28:24 -- host/auth.sh@44 -- # keyid=1 00:16:50.262 16:28:24 -- host/auth.sh@45 -- # key=DHHC-1:00:YzAxZWRkOWRmYWMxN2Q4ZjRjYTRhNjViMTYwNDk3MmZkMjY0YjUzNTVlNzNkNmRhPQxEjg==: 00:16:50.262 16:28:24 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:16:50.262 16:28:24 -- host/auth.sh@48 -- # echo ffdhe2048 00:16:50.262 16:28:24 -- host/auth.sh@49 -- # echo DHHC-1:00:YzAxZWRkOWRmYWMxN2Q4ZjRjYTRhNjViMTYwNDk3MmZkMjY0YjUzNTVlNzNkNmRhPQxEjg==: 00:16:50.262 16:28:24 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 1 00:16:50.262 16:28:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:50.262 16:28:24 -- host/auth.sh@68 -- # digest=sha256 00:16:50.262 16:28:24 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:16:50.262 16:28:24 -- host/auth.sh@68 -- # keyid=1 00:16:50.262 16:28:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:50.262 16:28:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.262 16:28:24 -- common/autotest_common.sh@10 -- # set +x 00:16:50.262 16:28:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.262 16:28:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:50.262 16:28:24 -- nvmf/common.sh@717 -- # local ip 00:16:50.262 16:28:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:50.262 16:28:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:50.262 16:28:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:50.262 16:28:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:50.262 16:28:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:50.262 16:28:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:50.262 16:28:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:50.262 16:28:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:50.262 16:28:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:50.262 16:28:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:16:50.262 16:28:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.262 16:28:24 -- common/autotest_common.sh@10 -- # set +x 00:16:50.520 nvme0n1 00:16:50.520 16:28:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.520 16:28:24 -- host/auth.sh@73 -- # 
rpc_cmd bdev_nvme_get_controllers 00:16:50.520 16:28:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.520 16:28:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:50.520 16:28:24 -- common/autotest_common.sh@10 -- # set +x 00:16:50.520 16:28:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.520 16:28:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.520 16:28:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:50.520 16:28:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.520 16:28:24 -- common/autotest_common.sh@10 -- # set +x 00:16:50.520 16:28:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.520 16:28:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:50.520 16:28:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:16:50.520 16:28:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:50.520 16:28:24 -- host/auth.sh@44 -- # digest=sha256 00:16:50.520 16:28:24 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:50.520 16:28:24 -- host/auth.sh@44 -- # keyid=2 00:16:50.520 16:28:24 -- host/auth.sh@45 -- # key=DHHC-1:01:YzRlYmMwN2Q5ZDY4NWFkZjEwYzY3MjU0ZTRjOGZiZTjeXngU: 00:16:50.520 16:28:24 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:16:50.520 16:28:24 -- host/auth.sh@48 -- # echo ffdhe2048 00:16:50.520 16:28:24 -- host/auth.sh@49 -- # echo DHHC-1:01:YzRlYmMwN2Q5ZDY4NWFkZjEwYzY3MjU0ZTRjOGZiZTjeXngU: 00:16:50.520 16:28:24 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 2 00:16:50.520 16:28:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:50.520 16:28:24 -- host/auth.sh@68 -- # digest=sha256 00:16:50.520 16:28:24 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:16:50.520 16:28:24 -- host/auth.sh@68 -- # keyid=2 00:16:50.520 16:28:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:50.520 16:28:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.520 16:28:24 -- common/autotest_common.sh@10 -- # set +x 00:16:50.520 16:28:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.520 16:28:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:50.520 16:28:24 -- nvmf/common.sh@717 -- # local ip 00:16:50.520 16:28:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:50.520 16:28:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:50.520 16:28:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:50.520 16:28:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:50.520 16:28:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:50.520 16:28:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:50.520 16:28:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:50.520 16:28:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:50.520 16:28:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:50.520 16:28:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:16:50.520 16:28:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.520 16:28:24 -- common/autotest_common.sh@10 -- # set +x 00:16:50.520 nvme0n1 00:16:50.520 16:28:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.520 16:28:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:50.520 16:28:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:50.520 16:28:24 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:16:50.520 16:28:24 -- common/autotest_common.sh@10 -- # set +x 00:16:50.520 16:28:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.778 16:28:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.778 16:28:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:50.778 16:28:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.778 16:28:24 -- common/autotest_common.sh@10 -- # set +x 00:16:50.778 16:28:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.778 16:28:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:50.778 16:28:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:16:50.778 16:28:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:50.778 16:28:24 -- host/auth.sh@44 -- # digest=sha256 00:16:50.778 16:28:24 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:50.778 16:28:24 -- host/auth.sh@44 -- # keyid=3 00:16:50.778 16:28:24 -- host/auth.sh@45 -- # key=DHHC-1:02:MTZiMGMwNmYwM2NkYjNhOGEzNmI4MTc2Mjg5NTdhNjUzNWZkYTFjN2NiOTM1OWZmpX7ncA==: 00:16:50.778 16:28:24 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:16:50.778 16:28:24 -- host/auth.sh@48 -- # echo ffdhe2048 00:16:50.778 16:28:24 -- host/auth.sh@49 -- # echo DHHC-1:02:MTZiMGMwNmYwM2NkYjNhOGEzNmI4MTc2Mjg5NTdhNjUzNWZkYTFjN2NiOTM1OWZmpX7ncA==: 00:16:50.778 16:28:24 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 3 00:16:50.778 16:28:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:50.778 16:28:24 -- host/auth.sh@68 -- # digest=sha256 00:16:50.778 16:28:24 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:16:50.778 16:28:24 -- host/auth.sh@68 -- # keyid=3 00:16:50.778 16:28:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:50.778 16:28:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.778 16:28:24 -- common/autotest_common.sh@10 -- # set +x 00:16:50.778 16:28:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.778 16:28:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:50.778 16:28:24 -- nvmf/common.sh@717 -- # local ip 00:16:50.778 16:28:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:50.778 16:28:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:50.778 16:28:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:50.778 16:28:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:50.778 16:28:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:50.778 16:28:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:50.778 16:28:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:50.778 16:28:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:50.778 16:28:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:50.778 16:28:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:16:50.778 16:28:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.778 16:28:24 -- common/autotest_common.sh@10 -- # set +x 00:16:50.778 nvme0n1 00:16:50.778 16:28:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.778 16:28:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:50.778 16:28:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:50.778 16:28:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.778 16:28:24 -- common/autotest_common.sh@10 -- # set +x 00:16:50.778 16:28:24 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.778 16:28:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.778 16:28:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:50.778 16:28:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.778 16:28:24 -- common/autotest_common.sh@10 -- # set +x 00:16:50.778 16:28:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.778 16:28:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:50.778 16:28:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:16:50.778 16:28:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:50.778 16:28:24 -- host/auth.sh@44 -- # digest=sha256 00:16:50.778 16:28:24 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:50.778 16:28:24 -- host/auth.sh@44 -- # keyid=4 00:16:50.778 16:28:24 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjBjOWQwYjQ2OTNmNDk0Zjg3MzRiMDQ1MWNkYzY2ODY5NDliYmJkNWM1NjQxNjBkZjQyNmY0MjdkZTQyMjFlM6Od0wk=: 00:16:50.778 16:28:24 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:16:50.778 16:28:24 -- host/auth.sh@48 -- # echo ffdhe2048 00:16:50.778 16:28:24 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjBjOWQwYjQ2OTNmNDk0Zjg3MzRiMDQ1MWNkYzY2ODY5NDliYmJkNWM1NjQxNjBkZjQyNmY0MjdkZTQyMjFlM6Od0wk=: 00:16:50.778 16:28:24 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 4 00:16:50.778 16:28:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:50.778 16:28:24 -- host/auth.sh@68 -- # digest=sha256 00:16:50.778 16:28:24 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:16:50.778 16:28:24 -- host/auth.sh@68 -- # keyid=4 00:16:50.778 16:28:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:50.778 16:28:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.778 16:28:24 -- common/autotest_common.sh@10 -- # set +x 00:16:50.779 16:28:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.779 16:28:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:50.779 16:28:24 -- nvmf/common.sh@717 -- # local ip 00:16:50.779 16:28:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:50.779 16:28:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:50.779 16:28:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:50.779 16:28:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:50.779 16:28:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:50.779 16:28:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:50.779 16:28:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:50.779 16:28:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:50.779 16:28:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:50.779 16:28:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:50.779 16:28:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.779 16:28:24 -- common/autotest_common.sh@10 -- # set +x 00:16:51.037 nvme0n1 00:16:51.037 16:28:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.037 16:28:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:51.037 16:28:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:51.037 16:28:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.037 16:28:24 -- common/autotest_common.sh@10 -- # set +x 00:16:51.037 16:28:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.037 16:28:24 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.037 16:28:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:51.037 16:28:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.037 16:28:24 -- common/autotest_common.sh@10 -- # set +x 00:16:51.037 16:28:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.037 16:28:24 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:16:51.037 16:28:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:51.037 16:28:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:16:51.037 16:28:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:51.037 16:28:24 -- host/auth.sh@44 -- # digest=sha256 00:16:51.037 16:28:24 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:51.037 16:28:24 -- host/auth.sh@44 -- # keyid=0 00:16:51.037 16:28:24 -- host/auth.sh@45 -- # key=DHHC-1:00:MDcwMmI4YTM0OWE3OTA4OTg2MjhkZGUwNGU0MDNkYWMHRk0g: 00:16:51.037 16:28:24 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:16:51.037 16:28:24 -- host/auth.sh@48 -- # echo ffdhe3072 00:16:51.295 16:28:25 -- host/auth.sh@49 -- # echo DHHC-1:00:MDcwMmI4YTM0OWE3OTA4OTg2MjhkZGUwNGU0MDNkYWMHRk0g: 00:16:51.295 16:28:25 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 0 00:16:51.295 16:28:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:51.295 16:28:25 -- host/auth.sh@68 -- # digest=sha256 00:16:51.295 16:28:25 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:16:51.295 16:28:25 -- host/auth.sh@68 -- # keyid=0 00:16:51.295 16:28:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:51.295 16:28:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.295 16:28:25 -- common/autotest_common.sh@10 -- # set +x 00:16:51.295 16:28:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.295 16:28:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:51.295 16:28:25 -- nvmf/common.sh@717 -- # local ip 00:16:51.295 16:28:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:51.295 16:28:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:51.295 16:28:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:51.295 16:28:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:51.295 16:28:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:51.295 16:28:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:51.295 16:28:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:51.295 16:28:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:51.295 16:28:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:51.295 16:28:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:16:51.295 16:28:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.295 16:28:25 -- common/autotest_common.sh@10 -- # set +x 00:16:51.553 nvme0n1 00:16:51.553 16:28:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.553 16:28:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:51.553 16:28:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.553 16:28:25 -- common/autotest_common.sh@10 -- # set +x 00:16:51.553 16:28:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:51.553 16:28:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.553 16:28:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.553 16:28:25 -- host/auth.sh@74 
-- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:51.553 16:28:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.553 16:28:25 -- common/autotest_common.sh@10 -- # set +x 00:16:51.553 16:28:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.553 16:28:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:51.553 16:28:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:16:51.553 16:28:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:51.553 16:28:25 -- host/auth.sh@44 -- # digest=sha256 00:16:51.553 16:28:25 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:51.553 16:28:25 -- host/auth.sh@44 -- # keyid=1 00:16:51.553 16:28:25 -- host/auth.sh@45 -- # key=DHHC-1:00:YzAxZWRkOWRmYWMxN2Q4ZjRjYTRhNjViMTYwNDk3MmZkMjY0YjUzNTVlNzNkNmRhPQxEjg==: 00:16:51.553 16:28:25 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:16:51.553 16:28:25 -- host/auth.sh@48 -- # echo ffdhe3072 00:16:51.553 16:28:25 -- host/auth.sh@49 -- # echo DHHC-1:00:YzAxZWRkOWRmYWMxN2Q4ZjRjYTRhNjViMTYwNDk3MmZkMjY0YjUzNTVlNzNkNmRhPQxEjg==: 00:16:51.553 16:28:25 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 1 00:16:51.553 16:28:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:51.553 16:28:25 -- host/auth.sh@68 -- # digest=sha256 00:16:51.553 16:28:25 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:16:51.553 16:28:25 -- host/auth.sh@68 -- # keyid=1 00:16:51.553 16:28:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:51.553 16:28:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.553 16:28:25 -- common/autotest_common.sh@10 -- # set +x 00:16:51.553 16:28:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.553 16:28:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:51.553 16:28:25 -- nvmf/common.sh@717 -- # local ip 00:16:51.553 16:28:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:51.553 16:28:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:51.553 16:28:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:51.553 16:28:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:51.553 16:28:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:51.553 16:28:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:51.553 16:28:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:51.553 16:28:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:51.553 16:28:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:51.553 16:28:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:16:51.553 16:28:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.553 16:28:25 -- common/autotest_common.sh@10 -- # set +x 00:16:51.811 nvme0n1 00:16:51.811 16:28:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.811 16:28:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:51.811 16:28:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.811 16:28:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:51.811 16:28:25 -- common/autotest_common.sh@10 -- # set +x 00:16:51.811 16:28:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.811 16:28:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.811 16:28:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:51.812 16:28:25 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:16:51.812 16:28:25 -- common/autotest_common.sh@10 -- # set +x 00:16:51.812 16:28:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.812 16:28:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:51.812 16:28:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:16:51.812 16:28:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:51.812 16:28:25 -- host/auth.sh@44 -- # digest=sha256 00:16:51.812 16:28:25 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:51.812 16:28:25 -- host/auth.sh@44 -- # keyid=2 00:16:51.812 16:28:25 -- host/auth.sh@45 -- # key=DHHC-1:01:YzRlYmMwN2Q5ZDY4NWFkZjEwYzY3MjU0ZTRjOGZiZTjeXngU: 00:16:51.812 16:28:25 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:16:51.812 16:28:25 -- host/auth.sh@48 -- # echo ffdhe3072 00:16:51.812 16:28:25 -- host/auth.sh@49 -- # echo DHHC-1:01:YzRlYmMwN2Q5ZDY4NWFkZjEwYzY3MjU0ZTRjOGZiZTjeXngU: 00:16:51.812 16:28:25 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 2 00:16:51.812 16:28:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:51.812 16:28:25 -- host/auth.sh@68 -- # digest=sha256 00:16:51.812 16:28:25 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:16:51.812 16:28:25 -- host/auth.sh@68 -- # keyid=2 00:16:51.812 16:28:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:51.812 16:28:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.812 16:28:25 -- common/autotest_common.sh@10 -- # set +x 00:16:51.812 16:28:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.812 16:28:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:51.812 16:28:25 -- nvmf/common.sh@717 -- # local ip 00:16:51.812 16:28:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:51.812 16:28:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:51.812 16:28:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:51.812 16:28:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:51.812 16:28:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:51.812 16:28:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:51.812 16:28:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:51.812 16:28:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:51.812 16:28:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:51.812 16:28:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:16:51.812 16:28:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.812 16:28:25 -- common/autotest_common.sh@10 -- # set +x 00:16:51.812 nvme0n1 00:16:51.812 16:28:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.812 16:28:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:51.812 16:28:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.812 16:28:25 -- common/autotest_common.sh@10 -- # set +x 00:16:51.812 16:28:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:51.812 16:28:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:52.072 16:28:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.072 16:28:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:52.072 16:28:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:52.072 16:28:25 -- common/autotest_common.sh@10 -- # set +x 00:16:52.072 16:28:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:52.072 
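[Editor's note] Each connect_authenticate pass visible above (host/auth.sh@66-74) follows the same four steps: restrict the initiator to the digest/dhgroup under test, attach with the matching key slot, confirm that a controller named nvme0 actually appeared, then detach. A sketch reconstructed from the trace, with rpc_cmd being the harness wrapper around SPDK's rpc.py and error handling elided:

    # Sketch of connect_authenticate per host/auth.sh@66-74 in the trace.
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3

        # Allow only the digest/dhgroup combination under test.
        rpc_cmd bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        # Attach to the target, authenticating with the selected key slot.
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}"

        # The controller only materializes if DH-HMAC-CHAP completed, so
        # this name check doubles as the authentication assertion.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]

        rpc_cmd bdev_nvme_detach_controller nvme0
    }

The [[ nvme0 == \n\v\m\e\0 ]] comparisons throughout the log are this name check after bash xtrace escapes the right-hand pattern; the lone nvme0n1 lines are the namespace surfacing once the attach succeeds.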
16:28:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:52.072 16:28:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:16:52.072 16:28:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:52.072 16:28:25 -- host/auth.sh@44 -- # digest=sha256 00:16:52.072 16:28:25 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:52.072 16:28:25 -- host/auth.sh@44 -- # keyid=3 00:16:52.072 16:28:25 -- host/auth.sh@45 -- # key=DHHC-1:02:MTZiMGMwNmYwM2NkYjNhOGEzNmI4MTc2Mjg5NTdhNjUzNWZkYTFjN2NiOTM1OWZmpX7ncA==: 00:16:52.072 16:28:25 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:16:52.072 16:28:25 -- host/auth.sh@48 -- # echo ffdhe3072 00:16:52.072 16:28:25 -- host/auth.sh@49 -- # echo DHHC-1:02:MTZiMGMwNmYwM2NkYjNhOGEzNmI4MTc2Mjg5NTdhNjUzNWZkYTFjN2NiOTM1OWZmpX7ncA==: 00:16:52.072 16:28:25 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 3 00:16:52.072 16:28:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:52.072 16:28:25 -- host/auth.sh@68 -- # digest=sha256 00:16:52.072 16:28:25 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:16:52.072 16:28:25 -- host/auth.sh@68 -- # keyid=3 00:16:52.072 16:28:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:52.072 16:28:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:52.072 16:28:25 -- common/autotest_common.sh@10 -- # set +x 00:16:52.072 16:28:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:52.072 16:28:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:52.072 16:28:25 -- nvmf/common.sh@717 -- # local ip 00:16:52.072 16:28:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:52.072 16:28:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:52.072 16:28:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:52.072 16:28:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:52.072 16:28:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:52.072 16:28:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:52.072 16:28:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:52.072 16:28:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:52.072 16:28:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:52.072 16:28:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:16:52.072 16:28:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:52.072 16:28:25 -- common/autotest_common.sh@10 -- # set +x 00:16:52.072 nvme0n1 00:16:52.072 16:28:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:52.072 16:28:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:52.072 16:28:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:52.072 16:28:26 -- common/autotest_common.sh@10 -- # set +x 00:16:52.072 16:28:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:52.072 16:28:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:52.072 16:28:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.072 16:28:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:52.072 16:28:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:52.072 16:28:26 -- common/autotest_common.sh@10 -- # set +x 00:16:52.072 16:28:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:52.072 16:28:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:52.072 16:28:26 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha256 ffdhe3072 4 00:16:52.072 16:28:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:52.072 16:28:26 -- host/auth.sh@44 -- # digest=sha256 00:16:52.072 16:28:26 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:52.072 16:28:26 -- host/auth.sh@44 -- # keyid=4 00:16:52.072 16:28:26 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjBjOWQwYjQ2OTNmNDk0Zjg3MzRiMDQ1MWNkYzY2ODY5NDliYmJkNWM1NjQxNjBkZjQyNmY0MjdkZTQyMjFlM6Od0wk=: 00:16:52.072 16:28:26 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:16:52.072 16:28:26 -- host/auth.sh@48 -- # echo ffdhe3072 00:16:52.072 16:28:26 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjBjOWQwYjQ2OTNmNDk0Zjg3MzRiMDQ1MWNkYzY2ODY5NDliYmJkNWM1NjQxNjBkZjQyNmY0MjdkZTQyMjFlM6Od0wk=: 00:16:52.072 16:28:26 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 4 00:16:52.072 16:28:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:52.072 16:28:26 -- host/auth.sh@68 -- # digest=sha256 00:16:52.072 16:28:26 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:16:52.072 16:28:26 -- host/auth.sh@68 -- # keyid=4 00:16:52.072 16:28:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:52.072 16:28:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:52.072 16:28:26 -- common/autotest_common.sh@10 -- # set +x 00:16:52.072 16:28:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:52.072 16:28:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:52.072 16:28:26 -- nvmf/common.sh@717 -- # local ip 00:16:52.072 16:28:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:52.072 16:28:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:52.072 16:28:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:52.072 16:28:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:52.072 16:28:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:52.072 16:28:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:52.072 16:28:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:52.072 16:28:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:52.072 16:28:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:52.072 16:28:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:52.072 16:28:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:52.072 16:28:26 -- common/autotest_common.sh@10 -- # set +x 00:16:52.330 nvme0n1 00:16:52.330 16:28:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:52.330 16:28:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:52.330 16:28:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:52.330 16:28:26 -- common/autotest_common.sh@10 -- # set +x 00:16:52.330 16:28:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:52.330 16:28:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:52.330 16:28:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.331 16:28:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:52.331 16:28:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:52.331 16:28:26 -- common/autotest_common.sh@10 -- # set +x 00:16:52.331 16:28:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:52.331 16:28:26 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:16:52.331 16:28:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:52.331 16:28:26 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha256 ffdhe4096 0 00:16:52.331 16:28:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:52.331 16:28:26 -- host/auth.sh@44 -- # digest=sha256 00:16:52.331 16:28:26 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:52.331 16:28:26 -- host/auth.sh@44 -- # keyid=0 00:16:52.331 16:28:26 -- host/auth.sh@45 -- # key=DHHC-1:00:MDcwMmI4YTM0OWE3OTA4OTg2MjhkZGUwNGU0MDNkYWMHRk0g: 00:16:52.331 16:28:26 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:16:52.331 16:28:26 -- host/auth.sh@48 -- # echo ffdhe4096 00:16:52.896 16:28:26 -- host/auth.sh@49 -- # echo DHHC-1:00:MDcwMmI4YTM0OWE3OTA4OTg2MjhkZGUwNGU0MDNkYWMHRk0g: 00:16:52.896 16:28:26 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 0 00:16:52.896 16:28:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:52.896 16:28:26 -- host/auth.sh@68 -- # digest=sha256 00:16:52.896 16:28:26 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:16:52.896 16:28:26 -- host/auth.sh@68 -- # keyid=0 00:16:52.896 16:28:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:52.896 16:28:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:52.896 16:28:26 -- common/autotest_common.sh@10 -- # set +x 00:16:53.156 16:28:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:53.156 16:28:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:53.156 16:28:26 -- nvmf/common.sh@717 -- # local ip 00:16:53.156 16:28:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:53.156 16:28:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:53.156 16:28:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:53.156 16:28:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:53.156 16:28:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:53.156 16:28:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:53.156 16:28:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:53.156 16:28:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:53.156 16:28:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:53.156 16:28:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:16:53.156 16:28:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:53.156 16:28:26 -- common/autotest_common.sh@10 -- # set +x 00:16:53.156 nvme0n1 00:16:53.156 16:28:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:53.156 16:28:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:53.156 16:28:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:53.156 16:28:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:53.156 16:28:27 -- common/autotest_common.sh@10 -- # set +x 00:16:53.156 16:28:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:53.414 16:28:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.414 16:28:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:53.414 16:28:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:53.414 16:28:27 -- common/autotest_common.sh@10 -- # set +x 00:16:53.414 16:28:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:53.414 16:28:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:53.414 16:28:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:16:53.414 16:28:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:53.414 16:28:27 -- host/auth.sh@44 -- # 
digest=sha256 00:16:53.414 16:28:27 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:53.414 16:28:27 -- host/auth.sh@44 -- # keyid=1 00:16:53.414 16:28:27 -- host/auth.sh@45 -- # key=DHHC-1:00:YzAxZWRkOWRmYWMxN2Q4ZjRjYTRhNjViMTYwNDk3MmZkMjY0YjUzNTVlNzNkNmRhPQxEjg==: 00:16:53.414 16:28:27 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:16:53.414 16:28:27 -- host/auth.sh@48 -- # echo ffdhe4096 00:16:53.414 16:28:27 -- host/auth.sh@49 -- # echo DHHC-1:00:YzAxZWRkOWRmYWMxN2Q4ZjRjYTRhNjViMTYwNDk3MmZkMjY0YjUzNTVlNzNkNmRhPQxEjg==: 00:16:53.414 16:28:27 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 1 00:16:53.414 16:28:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:53.414 16:28:27 -- host/auth.sh@68 -- # digest=sha256 00:16:53.414 16:28:27 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:16:53.414 16:28:27 -- host/auth.sh@68 -- # keyid=1 00:16:53.414 16:28:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:53.414 16:28:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:53.414 16:28:27 -- common/autotest_common.sh@10 -- # set +x 00:16:53.414 16:28:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:53.414 16:28:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:53.414 16:28:27 -- nvmf/common.sh@717 -- # local ip 00:16:53.414 16:28:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:53.414 16:28:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:53.414 16:28:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:53.414 16:28:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:53.414 16:28:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:53.414 16:28:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:53.414 16:28:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:53.414 16:28:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:53.414 16:28:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:53.414 16:28:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:16:53.414 16:28:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:53.414 16:28:27 -- common/autotest_common.sh@10 -- # set +x 00:16:53.414 nvme0n1 00:16:53.414 16:28:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:53.414 16:28:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:53.414 16:28:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:53.414 16:28:27 -- common/autotest_common.sh@10 -- # set +x 00:16:53.414 16:28:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:53.414 16:28:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:53.673 16:28:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.673 16:28:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:53.673 16:28:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:53.673 16:28:27 -- common/autotest_common.sh@10 -- # set +x 00:16:53.673 16:28:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:53.673 16:28:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:53.673 16:28:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:16:53.673 16:28:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:53.673 16:28:27 -- host/auth.sh@44 -- # digest=sha256 00:16:53.673 16:28:27 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:53.673 16:28:27 -- host/auth.sh@44 
-- # keyid=2 00:16:53.673 16:28:27 -- host/auth.sh@45 -- # key=DHHC-1:01:YzRlYmMwN2Q5ZDY4NWFkZjEwYzY3MjU0ZTRjOGZiZTjeXngU: 00:16:53.673 16:28:27 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:16:53.673 16:28:27 -- host/auth.sh@48 -- # echo ffdhe4096 00:16:53.673 16:28:27 -- host/auth.sh@49 -- # echo DHHC-1:01:YzRlYmMwN2Q5ZDY4NWFkZjEwYzY3MjU0ZTRjOGZiZTjeXngU: 00:16:53.673 16:28:27 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 2 00:16:53.673 16:28:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:53.673 16:28:27 -- host/auth.sh@68 -- # digest=sha256 00:16:53.673 16:28:27 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:16:53.673 16:28:27 -- host/auth.sh@68 -- # keyid=2 00:16:53.673 16:28:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:53.673 16:28:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:53.673 16:28:27 -- common/autotest_common.sh@10 -- # set +x 00:16:53.673 16:28:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:53.673 16:28:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:53.673 16:28:27 -- nvmf/common.sh@717 -- # local ip 00:16:53.673 16:28:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:53.673 16:28:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:53.673 16:28:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:53.673 16:28:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:53.673 16:28:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:53.673 16:28:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:53.673 16:28:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:53.673 16:28:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:53.673 16:28:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:53.673 16:28:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:16:53.673 16:28:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:53.673 16:28:27 -- common/autotest_common.sh@10 -- # set +x 00:16:53.673 nvme0n1 00:16:53.673 16:28:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:53.673 16:28:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:53.673 16:28:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:53.673 16:28:27 -- common/autotest_common.sh@10 -- # set +x 00:16:53.673 16:28:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:53.673 16:28:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:53.931 16:28:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.931 16:28:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:53.931 16:28:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:53.931 16:28:27 -- common/autotest_common.sh@10 -- # set +x 00:16:53.931 16:28:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:53.931 16:28:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:53.931 16:28:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:16:53.931 16:28:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:53.931 16:28:27 -- host/auth.sh@44 -- # digest=sha256 00:16:53.931 16:28:27 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:53.931 16:28:27 -- host/auth.sh@44 -- # keyid=3 00:16:53.931 16:28:27 -- host/auth.sh@45 -- # key=DHHC-1:02:MTZiMGMwNmYwM2NkYjNhOGEzNmI4MTc2Mjg5NTdhNjUzNWZkYTFjN2NiOTM1OWZmpX7ncA==: 00:16:53.931 16:28:27 
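[Editor's note] The for digest / for dhgroup / for keyid markers (host/auth.sh@107-109) drive a full sweep: after an initial pass that offers every digest and dhgroup at once (the sha256,sha384,sha512 and ffdhe2048,...,ffdhe8192 printf output near the top of this run), each combination is retried individually. A sketch of the driver loop; the array contents are taken from the printf output and the DHHC-1 secrets visible in the trace, with the secrets abbreviated here (full values appear verbatim above):

    digests=(sha256 sha384 sha512)
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    # Secrets abbreviated; the full strings are in the trace.
    keys=([0]="DHHC-1:00:MDcwMmI4...Rk0g:"
          [1]="DHHC-1:00:YzAxZWRk...Ejg==:"
          [2]="DHHC-1:01:YzRlYmMw...XngU:"
          [3]="DHHC-1:02:MTZiMGMw...7ncA==:"
          [4]="DHHC-1:03:ZjBjOWQw...0wk=:")

    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done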
-- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:16:53.931 16:28:27 -- host/auth.sh@48 -- # echo ffdhe4096 00:16:53.931 16:28:27 -- host/auth.sh@49 -- # echo DHHC-1:02:MTZiMGMwNmYwM2NkYjNhOGEzNmI4MTc2Mjg5NTdhNjUzNWZkYTFjN2NiOTM1OWZmpX7ncA==: 00:16:53.931 16:28:27 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 3 00:16:53.931 16:28:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:53.931 16:28:27 -- host/auth.sh@68 -- # digest=sha256 00:16:53.931 16:28:27 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:16:53.931 16:28:27 -- host/auth.sh@68 -- # keyid=3 00:16:53.931 16:28:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:53.931 16:28:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:53.931 16:28:27 -- common/autotest_common.sh@10 -- # set +x 00:16:53.931 16:28:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:53.931 16:28:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:53.931 16:28:27 -- nvmf/common.sh@717 -- # local ip 00:16:53.931 16:28:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:53.931 16:28:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:53.931 16:28:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:53.931 16:28:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:53.931 16:28:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:53.931 16:28:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:53.932 16:28:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:53.932 16:28:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:53.932 16:28:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:53.932 16:28:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:16:53.932 16:28:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:53.932 16:28:27 -- common/autotest_common.sh@10 -- # set +x 00:16:53.932 nvme0n1 00:16:53.932 16:28:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:53.932 16:28:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:53.932 16:28:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:53.932 16:28:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:53.932 16:28:27 -- common/autotest_common.sh@10 -- # set +x 00:16:53.932 16:28:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:54.190 16:28:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.190 16:28:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:54.190 16:28:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:54.190 16:28:27 -- common/autotest_common.sh@10 -- # set +x 00:16:54.190 16:28:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:54.190 16:28:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:54.190 16:28:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:16:54.190 16:28:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:54.190 16:28:28 -- host/auth.sh@44 -- # digest=sha256 00:16:54.190 16:28:28 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:54.190 16:28:28 -- host/auth.sh@44 -- # keyid=4 00:16:54.190 16:28:28 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjBjOWQwYjQ2OTNmNDk0Zjg3MzRiMDQ1MWNkYzY2ODY5NDliYmJkNWM1NjQxNjBkZjQyNmY0MjdkZTQyMjFlM6Od0wk=: 00:16:54.190 16:28:28 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:16:54.190 16:28:28 -- host/auth.sh@48 -- # echo 
ffdhe4096 00:16:54.190 16:28:28 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjBjOWQwYjQ2OTNmNDk0Zjg3MzRiMDQ1MWNkYzY2ODY5NDliYmJkNWM1NjQxNjBkZjQyNmY0MjdkZTQyMjFlM6Od0wk=: 00:16:54.190 16:28:28 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 4 00:16:54.190 16:28:28 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:54.190 16:28:28 -- host/auth.sh@68 -- # digest=sha256 00:16:54.190 16:28:28 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:16:54.190 16:28:28 -- host/auth.sh@68 -- # keyid=4 00:16:54.190 16:28:28 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:54.190 16:28:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:54.190 16:28:28 -- common/autotest_common.sh@10 -- # set +x 00:16:54.190 16:28:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:54.190 16:28:28 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:54.190 16:28:28 -- nvmf/common.sh@717 -- # local ip 00:16:54.190 16:28:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:54.190 16:28:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:54.190 16:28:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:54.190 16:28:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:54.190 16:28:28 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:54.190 16:28:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:54.190 16:28:28 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:54.190 16:28:28 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:54.190 16:28:28 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:54.190 16:28:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:54.190 16:28:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:54.190 16:28:28 -- common/autotest_common.sh@10 -- # set +x 00:16:54.190 nvme0n1 00:16:54.190 16:28:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:54.190 16:28:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:54.190 16:28:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:54.190 16:28:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:54.190 16:28:28 -- common/autotest_common.sh@10 -- # set +x 00:16:54.448 16:28:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:54.448 16:28:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.448 16:28:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:54.448 16:28:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:54.448 16:28:28 -- common/autotest_common.sh@10 -- # set +x 00:16:54.448 16:28:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:54.448 16:28:28 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:16:54.448 16:28:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:54.448 16:28:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:16:54.448 16:28:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:54.448 16:28:28 -- host/auth.sh@44 -- # digest=sha256 00:16:54.448 16:28:28 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:54.448 16:28:28 -- host/auth.sh@44 -- # keyid=0 00:16:54.448 16:28:28 -- host/auth.sh@45 -- # key=DHHC-1:00:MDcwMmI4YTM0OWE3OTA4OTg2MjhkZGUwNGU0MDNkYWMHRk0g: 00:16:54.448 16:28:28 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:16:54.448 16:28:28 -- host/auth.sh@48 -- # echo ffdhe6144 00:16:56.353 16:28:30 -- 
host/auth.sh@49 -- # echo DHHC-1:00:MDcwMmI4YTM0OWE3OTA4OTg2MjhkZGUwNGU0MDNkYWMHRk0g: 00:16:56.353 16:28:30 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 0 00:16:56.353 16:28:30 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:56.353 16:28:30 -- host/auth.sh@68 -- # digest=sha256 00:16:56.353 16:28:30 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:16:56.353 16:28:30 -- host/auth.sh@68 -- # keyid=0 00:16:56.353 16:28:30 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:56.353 16:28:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:56.353 16:28:30 -- common/autotest_common.sh@10 -- # set +x 00:16:56.353 16:28:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:56.353 16:28:30 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:56.353 16:28:30 -- nvmf/common.sh@717 -- # local ip 00:16:56.353 16:28:30 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:56.353 16:28:30 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:56.353 16:28:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:56.353 16:28:30 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:56.353 16:28:30 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:56.353 16:28:30 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:56.353 16:28:30 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:56.353 16:28:30 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:56.353 16:28:30 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:56.353 16:28:30 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:16:56.353 16:28:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:56.353 16:28:30 -- common/autotest_common.sh@10 -- # set +x 00:16:56.353 nvme0n1 00:16:56.353 16:28:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:56.353 16:28:30 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:56.353 16:28:30 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:56.353 16:28:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:56.353 16:28:30 -- common/autotest_common.sh@10 -- # set +x 00:16:56.353 16:28:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:56.611 16:28:30 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.611 16:28:30 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:56.611 16:28:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:56.611 16:28:30 -- common/autotest_common.sh@10 -- # set +x 00:16:56.611 16:28:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:56.611 16:28:30 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:56.611 16:28:30 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:16:56.611 16:28:30 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:56.611 16:28:30 -- host/auth.sh@44 -- # digest=sha256 00:16:56.611 16:28:30 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:56.611 16:28:30 -- host/auth.sh@44 -- # keyid=1 00:16:56.611 16:28:30 -- host/auth.sh@45 -- # key=DHHC-1:00:YzAxZWRkOWRmYWMxN2Q4ZjRjYTRhNjViMTYwNDk3MmZkMjY0YjUzNTVlNzNkNmRhPQxEjg==: 00:16:56.611 16:28:30 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:16:56.611 16:28:30 -- host/auth.sh@48 -- # echo ffdhe6144 00:16:56.611 16:28:30 -- host/auth.sh@49 -- # echo DHHC-1:00:YzAxZWRkOWRmYWMxN2Q4ZjRjYTRhNjViMTYwNDk3MmZkMjY0YjUzNTVlNzNkNmRhPQxEjg==: 00:16:56.611 16:28:30 -- 
host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 1 00:16:56.611 16:28:30 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:56.611 16:28:30 -- host/auth.sh@68 -- # digest=sha256 00:16:56.611 16:28:30 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:16:56.611 16:28:30 -- host/auth.sh@68 -- # keyid=1 00:16:56.611 16:28:30 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:56.611 16:28:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:56.611 16:28:30 -- common/autotest_common.sh@10 -- # set +x 00:16:56.611 16:28:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:56.611 16:28:30 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:56.611 16:28:30 -- nvmf/common.sh@717 -- # local ip 00:16:56.611 16:28:30 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:56.611 16:28:30 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:56.611 16:28:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:56.611 16:28:30 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:56.611 16:28:30 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:56.611 16:28:30 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:56.611 16:28:30 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:56.611 16:28:30 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:56.611 16:28:30 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:56.611 16:28:30 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:16:56.611 16:28:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:56.611 16:28:30 -- common/autotest_common.sh@10 -- # set +x 00:16:56.869 nvme0n1 00:16:56.869 16:28:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:56.869 16:28:30 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:56.869 16:28:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:56.869 16:28:30 -- common/autotest_common.sh@10 -- # set +x 00:16:56.869 16:28:30 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:56.869 16:28:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:56.869 16:28:30 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.869 16:28:30 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:56.869 16:28:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:56.869 16:28:30 -- common/autotest_common.sh@10 -- # set +x 00:16:56.869 16:28:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:56.869 16:28:30 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:56.869 16:28:30 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:16:56.869 16:28:30 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:56.869 16:28:30 -- host/auth.sh@44 -- # digest=sha256 00:16:56.869 16:28:30 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:56.869 16:28:30 -- host/auth.sh@44 -- # keyid=2 00:16:56.869 16:28:30 -- host/auth.sh@45 -- # key=DHHC-1:01:YzRlYmMwN2Q5ZDY4NWFkZjEwYzY3MjU0ZTRjOGZiZTjeXngU: 00:16:56.869 16:28:30 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:16:56.869 16:28:30 -- host/auth.sh@48 -- # echo ffdhe6144 00:16:56.869 16:28:30 -- host/auth.sh@49 -- # echo DHHC-1:01:YzRlYmMwN2Q5ZDY4NWFkZjEwYzY3MjU0ZTRjOGZiZTjeXngU: 00:16:56.869 16:28:30 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 2 00:16:56.869 16:28:30 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:56.869 16:28:30 -- 
host/auth.sh@68 -- # digest=sha256 00:16:56.869 16:28:30 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:16:56.869 16:28:30 -- host/auth.sh@68 -- # keyid=2 00:16:56.869 16:28:30 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:56.869 16:28:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:56.869 16:28:30 -- common/autotest_common.sh@10 -- # set +x 00:16:56.869 16:28:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:56.869 16:28:30 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:56.869 16:28:30 -- nvmf/common.sh@717 -- # local ip 00:16:56.869 16:28:30 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:56.869 16:28:30 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:56.869 16:28:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:56.869 16:28:30 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:56.869 16:28:30 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:56.869 16:28:30 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:56.869 16:28:30 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:56.869 16:28:30 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:56.869 16:28:30 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:56.869 16:28:30 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:16:56.869 16:28:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:56.869 16:28:30 -- common/autotest_common.sh@10 -- # set +x 00:16:57.434 nvme0n1 00:16:57.434 16:28:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:57.434 16:28:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:57.434 16:28:31 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:57.434 16:28:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:57.434 16:28:31 -- common/autotest_common.sh@10 -- # set +x 00:16:57.434 16:28:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:57.434 16:28:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.434 16:28:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:57.434 16:28:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:57.434 16:28:31 -- common/autotest_common.sh@10 -- # set +x 00:16:57.434 16:28:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:57.434 16:28:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:57.434 16:28:31 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:16:57.434 16:28:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:57.434 16:28:31 -- host/auth.sh@44 -- # digest=sha256 00:16:57.434 16:28:31 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:57.434 16:28:31 -- host/auth.sh@44 -- # keyid=3 00:16:57.434 16:28:31 -- host/auth.sh@45 -- # key=DHHC-1:02:MTZiMGMwNmYwM2NkYjNhOGEzNmI4MTc2Mjg5NTdhNjUzNWZkYTFjN2NiOTM1OWZmpX7ncA==: 00:16:57.434 16:28:31 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:16:57.434 16:28:31 -- host/auth.sh@48 -- # echo ffdhe6144 00:16:57.434 16:28:31 -- host/auth.sh@49 -- # echo DHHC-1:02:MTZiMGMwNmYwM2NkYjNhOGEzNmI4MTc2Mjg5NTdhNjUzNWZkYTFjN2NiOTM1OWZmpX7ncA==: 00:16:57.434 16:28:31 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 3 00:16:57.434 16:28:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:57.434 16:28:31 -- host/auth.sh@68 -- # digest=sha256 00:16:57.434 16:28:31 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:16:57.435 16:28:31 
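[Editor's note] Before every attach, get_main_ns_ip (traced at nvmf/common.sh@717-731) resolves which address to dial: NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp, which is why every bdev_nvme_attach_controller above targets 10.0.0.1. A sketch of that selection; the transport variable name and the exact grouping of the two emptiness checks on common.sh line 723 are assumptions, since the trace shows them only after expansion:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP
            ["tcp"]=NVMF_INITIATOR_IP
        )

        # TEST_TRANSPORT is an assumed name; the trace shows it already
        # expanded to "tcp".
        [[ -z $TEST_TRANSPORT ]] && return 1
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1

        ip=${ip_candidates[$TEST_TRANSPORT]}  # holds a variable *name*
        [[ -z ${!ip} ]] && return 1           # indirect expansion to the IP
        echo "${!ip}"                         # 10.0.0.1 in this run
    }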
-- host/auth.sh@68 -- # keyid=3 00:16:57.435 16:28:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:57.435 16:28:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:57.435 16:28:31 -- common/autotest_common.sh@10 -- # set +x 00:16:57.435 16:28:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:57.435 16:28:31 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:57.435 16:28:31 -- nvmf/common.sh@717 -- # local ip 00:16:57.435 16:28:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:57.435 16:28:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:57.435 16:28:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:57.435 16:28:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:57.435 16:28:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:57.435 16:28:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:57.435 16:28:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:57.435 16:28:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:57.435 16:28:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:57.435 16:28:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:16:57.435 16:28:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:57.435 16:28:31 -- common/autotest_common.sh@10 -- # set +x 00:16:57.692 nvme0n1 00:16:57.692 16:28:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:57.692 16:28:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:57.692 16:28:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:57.692 16:28:31 -- common/autotest_common.sh@10 -- # set +x 00:16:57.692 16:28:31 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:57.692 16:28:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:57.692 16:28:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.692 16:28:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:57.692 16:28:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:57.692 16:28:31 -- common/autotest_common.sh@10 -- # set +x 00:16:57.692 16:28:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:57.692 16:28:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:57.692 16:28:31 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:16:57.692 16:28:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:57.692 16:28:31 -- host/auth.sh@44 -- # digest=sha256 00:16:57.692 16:28:31 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:57.692 16:28:31 -- host/auth.sh@44 -- # keyid=4 00:16:57.692 16:28:31 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjBjOWQwYjQ2OTNmNDk0Zjg3MzRiMDQ1MWNkYzY2ODY5NDliYmJkNWM1NjQxNjBkZjQyNmY0MjdkZTQyMjFlM6Od0wk=: 00:16:57.692 16:28:31 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:16:57.692 16:28:31 -- host/auth.sh@48 -- # echo ffdhe6144 00:16:57.692 16:28:31 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjBjOWQwYjQ2OTNmNDk0Zjg3MzRiMDQ1MWNkYzY2ODY5NDliYmJkNWM1NjQxNjBkZjQyNmY0MjdkZTQyMjFlM6Od0wk=: 00:16:57.692 16:28:31 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 4 00:16:57.692 16:28:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:57.692 16:28:31 -- host/auth.sh@68 -- # digest=sha256 00:16:57.692 16:28:31 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:16:57.692 16:28:31 -- host/auth.sh@68 -- # keyid=4 00:16:57.692 16:28:31 -- host/auth.sh@69 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:57.692 16:28:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:57.692 16:28:31 -- common/autotest_common.sh@10 -- # set +x 00:16:57.692 16:28:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:57.692 16:28:31 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:57.692 16:28:31 -- nvmf/common.sh@717 -- # local ip 00:16:57.692 16:28:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:57.692 16:28:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:57.692 16:28:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:57.692 16:28:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:57.692 16:28:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:57.692 16:28:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:57.692 16:28:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:57.692 16:28:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:57.693 16:28:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:57.693 16:28:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:57.693 16:28:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:57.693 16:28:31 -- common/autotest_common.sh@10 -- # set +x 00:16:58.257 nvme0n1 00:16:58.257 16:28:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:58.257 16:28:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:58.258 16:28:32 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:58.258 16:28:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:58.258 16:28:32 -- common/autotest_common.sh@10 -- # set +x 00:16:58.258 16:28:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:58.258 16:28:32 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.258 16:28:32 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:58.258 16:28:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:58.258 16:28:32 -- common/autotest_common.sh@10 -- # set +x 00:16:58.258 16:28:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:58.258 16:28:32 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:16:58.258 16:28:32 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:58.258 16:28:32 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:16:58.258 16:28:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:58.258 16:28:32 -- host/auth.sh@44 -- # digest=sha256 00:16:58.258 16:28:32 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:58.258 16:28:32 -- host/auth.sh@44 -- # keyid=0 00:16:58.258 16:28:32 -- host/auth.sh@45 -- # key=DHHC-1:00:MDcwMmI4YTM0OWE3OTA4OTg2MjhkZGUwNGU0MDNkYWMHRk0g: 00:16:58.258 16:28:32 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:16:58.258 16:28:32 -- host/auth.sh@48 -- # echo ffdhe8192 00:17:02.440 16:28:36 -- host/auth.sh@49 -- # echo DHHC-1:00:MDcwMmI4YTM0OWE3OTA4OTg2MjhkZGUwNGU0MDNkYWMHRk0g: 00:17:02.440 16:28:36 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 0 00:17:02.440 16:28:36 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:02.440 16:28:36 -- host/auth.sh@68 -- # digest=sha256 00:17:02.440 16:28:36 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:17:02.440 16:28:36 -- host/auth.sh@68 -- # keyid=0 00:17:02.440 16:28:36 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 
00:17:02.440 16:28:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:02.440 16:28:36 -- common/autotest_common.sh@10 -- # set +x 00:17:02.440 16:28:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:02.440 16:28:36 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:02.440 16:28:36 -- nvmf/common.sh@717 -- # local ip 00:17:02.440 16:28:36 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:02.440 16:28:36 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:02.440 16:28:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:02.440 16:28:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:02.440 16:28:36 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:02.440 16:28:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:02.440 16:28:36 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:02.440 16:28:36 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:02.440 16:28:36 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:02.440 16:28:36 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:17:02.440 16:28:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:02.440 16:28:36 -- common/autotest_common.sh@10 -- # set +x 00:17:02.698 nvme0n1 00:17:02.698 16:28:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:02.698 16:28:36 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:02.698 16:28:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:02.698 16:28:36 -- common/autotest_common.sh@10 -- # set +x 00:17:02.698 16:28:36 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:02.698 16:28:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:02.698 16:28:36 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.698 16:28:36 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:02.698 16:28:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:02.698 16:28:36 -- common/autotest_common.sh@10 -- # set +x 00:17:02.698 16:28:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:02.698 16:28:36 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:02.698 16:28:36 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:17:02.698 16:28:36 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:02.698 16:28:36 -- host/auth.sh@44 -- # digest=sha256 00:17:02.698 16:28:36 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:02.698 16:28:36 -- host/auth.sh@44 -- # keyid=1 00:17:02.698 16:28:36 -- host/auth.sh@45 -- # key=DHHC-1:00:YzAxZWRkOWRmYWMxN2Q4ZjRjYTRhNjViMTYwNDk3MmZkMjY0YjUzNTVlNzNkNmRhPQxEjg==: 00:17:02.698 16:28:36 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:17:02.698 16:28:36 -- host/auth.sh@48 -- # echo ffdhe8192 00:17:02.698 16:28:36 -- host/auth.sh@49 -- # echo DHHC-1:00:YzAxZWRkOWRmYWMxN2Q4ZjRjYTRhNjViMTYwNDk3MmZkMjY0YjUzNTVlNzNkNmRhPQxEjg==: 00:17:02.698 16:28:36 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 1 00:17:02.698 16:28:36 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:02.698 16:28:36 -- host/auth.sh@68 -- # digest=sha256 00:17:02.698 16:28:36 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:17:02.698 16:28:36 -- host/auth.sh@68 -- # keyid=1 00:17:02.698 16:28:36 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:02.698 16:28:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:02.698 16:28:36 -- 
common/autotest_common.sh@10 -- # set +x 00:17:02.956 16:28:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:02.956 16:28:36 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:02.956 16:28:36 -- nvmf/common.sh@717 -- # local ip 00:17:02.956 16:28:36 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:02.956 16:28:36 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:02.956 16:28:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:02.956 16:28:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:02.956 16:28:36 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:02.956 16:28:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:02.956 16:28:36 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:02.956 16:28:36 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:02.956 16:28:36 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:02.956 16:28:36 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:17:02.956 16:28:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:02.956 16:28:36 -- common/autotest_common.sh@10 -- # set +x 00:17:03.524 nvme0n1 00:17:03.524 16:28:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:03.524 16:28:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:03.524 16:28:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:03.524 16:28:37 -- common/autotest_common.sh@10 -- # set +x 00:17:03.524 16:28:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:03.524 16:28:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:03.524 16:28:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.524 16:28:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:03.524 16:28:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:03.524 16:28:37 -- common/autotest_common.sh@10 -- # set +x 00:17:03.524 16:28:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:03.524 16:28:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:03.524 16:28:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:17:03.524 16:28:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:03.524 16:28:37 -- host/auth.sh@44 -- # digest=sha256 00:17:03.524 16:28:37 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:03.524 16:28:37 -- host/auth.sh@44 -- # keyid=2 00:17:03.524 16:28:37 -- host/auth.sh@45 -- # key=DHHC-1:01:YzRlYmMwN2Q5ZDY4NWFkZjEwYzY3MjU0ZTRjOGZiZTjeXngU: 00:17:03.524 16:28:37 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:17:03.524 16:28:37 -- host/auth.sh@48 -- # echo ffdhe8192 00:17:03.524 16:28:37 -- host/auth.sh@49 -- # echo DHHC-1:01:YzRlYmMwN2Q5ZDY4NWFkZjEwYzY3MjU0ZTRjOGZiZTjeXngU: 00:17:03.524 16:28:37 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 2 00:17:03.524 16:28:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:03.524 16:28:37 -- host/auth.sh@68 -- # digest=sha256 00:17:03.524 16:28:37 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:17:03.524 16:28:37 -- host/auth.sh@68 -- # keyid=2 00:17:03.524 16:28:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:03.524 16:28:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:03.524 16:28:37 -- common/autotest_common.sh@10 -- # set +x 00:17:03.524 16:28:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:03.524 16:28:37 -- host/auth.sh@70 -- # 
get_main_ns_ip 00:17:03.524 16:28:37 -- nvmf/common.sh@717 -- # local ip 00:17:03.524 16:28:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:03.524 16:28:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:03.524 16:28:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:03.524 16:28:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:03.524 16:28:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:03.524 16:28:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:03.524 16:28:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:03.524 16:28:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:03.524 16:28:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:03.524 16:28:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:03.524 16:28:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:03.524 16:28:37 -- common/autotest_common.sh@10 -- # set +x 00:17:04.091 nvme0n1 00:17:04.091 16:28:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:04.091 16:28:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:04.091 16:28:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:04.091 16:28:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:04.091 16:28:38 -- common/autotest_common.sh@10 -- # set +x 00:17:04.091 16:28:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:04.091 16:28:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.091 16:28:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:04.091 16:28:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:04.091 16:28:38 -- common/autotest_common.sh@10 -- # set +x 00:17:04.091 16:28:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:04.091 16:28:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:04.091 16:28:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:17:04.091 16:28:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:04.091 16:28:38 -- host/auth.sh@44 -- # digest=sha256 00:17:04.091 16:28:38 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:04.091 16:28:38 -- host/auth.sh@44 -- # keyid=3 00:17:04.091 16:28:38 -- host/auth.sh@45 -- # key=DHHC-1:02:MTZiMGMwNmYwM2NkYjNhOGEzNmI4MTc2Mjg5NTdhNjUzNWZkYTFjN2NiOTM1OWZmpX7ncA==: 00:17:04.091 16:28:38 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:17:04.091 16:28:38 -- host/auth.sh@48 -- # echo ffdhe8192 00:17:04.091 16:28:38 -- host/auth.sh@49 -- # echo DHHC-1:02:MTZiMGMwNmYwM2NkYjNhOGEzNmI4MTc2Mjg5NTdhNjUzNWZkYTFjN2NiOTM1OWZmpX7ncA==: 00:17:04.091 16:28:38 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 3 00:17:04.091 16:28:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:04.091 16:28:38 -- host/auth.sh@68 -- # digest=sha256 00:17:04.091 16:28:38 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:17:04.091 16:28:38 -- host/auth.sh@68 -- # keyid=3 00:17:04.091 16:28:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:04.091 16:28:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:04.091 16:28:38 -- common/autotest_common.sh@10 -- # set +x 00:17:04.091 16:28:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:04.091 16:28:38 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:04.091 16:28:38 -- nvmf/common.sh@717 -- # local ip 00:17:04.091 16:28:38 -- nvmf/common.sh@718 -- 
# ip_candidates=() 00:17:04.091 16:28:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:04.091 16:28:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:04.091 16:28:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:04.091 16:28:38 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:04.091 16:28:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:04.091 16:28:38 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:04.091 16:28:38 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:04.091 16:28:38 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:04.091 16:28:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:17:04.091 16:28:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:04.091 16:28:38 -- common/autotest_common.sh@10 -- # set +x 00:17:04.658 nvme0n1 00:17:04.658 16:28:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:04.658 16:28:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:04.658 16:28:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:04.658 16:28:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:04.658 16:28:38 -- common/autotest_common.sh@10 -- # set +x 00:17:04.658 16:28:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:04.917 16:28:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.917 16:28:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:04.917 16:28:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:04.917 16:28:38 -- common/autotest_common.sh@10 -- # set +x 00:17:04.917 16:28:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:04.917 16:28:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:04.917 16:28:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:17:04.917 16:28:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:04.917 16:28:38 -- host/auth.sh@44 -- # digest=sha256 00:17:04.917 16:28:38 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:04.917 16:28:38 -- host/auth.sh@44 -- # keyid=4 00:17:04.917 16:28:38 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjBjOWQwYjQ2OTNmNDk0Zjg3MzRiMDQ1MWNkYzY2ODY5NDliYmJkNWM1NjQxNjBkZjQyNmY0MjdkZTQyMjFlM6Od0wk=: 00:17:04.917 16:28:38 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:17:04.917 16:28:38 -- host/auth.sh@48 -- # echo ffdhe8192 00:17:04.917 16:28:38 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjBjOWQwYjQ2OTNmNDk0Zjg3MzRiMDQ1MWNkYzY2ODY5NDliYmJkNWM1NjQxNjBkZjQyNmY0MjdkZTQyMjFlM6Od0wk=: 00:17:04.917 16:28:38 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 4 00:17:04.917 16:28:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:04.917 16:28:38 -- host/auth.sh@68 -- # digest=sha256 00:17:04.917 16:28:38 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:17:04.917 16:28:38 -- host/auth.sh@68 -- # keyid=4 00:17:04.917 16:28:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:04.917 16:28:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:04.917 16:28:38 -- common/autotest_common.sh@10 -- # set +x 00:17:04.917 16:28:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:04.917 16:28:38 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:04.917 16:28:38 -- nvmf/common.sh@717 -- # local ip 00:17:04.917 16:28:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:04.917 16:28:38 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:17:04.917 16:28:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:04.917 16:28:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:04.917 16:28:38 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:04.917 16:28:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:04.917 16:28:38 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:04.917 16:28:38 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:04.917 16:28:38 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:04.917 16:28:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:04.917 16:28:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:04.917 16:28:38 -- common/autotest_common.sh@10 -- # set +x 00:17:05.484 nvme0n1 00:17:05.484 16:28:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.484 16:28:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:05.484 16:28:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.484 16:28:39 -- common/autotest_common.sh@10 -- # set +x 00:17:05.484 16:28:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:05.484 16:28:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.484 16:28:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.484 16:28:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:05.484 16:28:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.484 16:28:39 -- common/autotest_common.sh@10 -- # set +x 00:17:05.484 16:28:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.484 16:28:39 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:17:05.484 16:28:39 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:17:05.484 16:28:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:05.484 16:28:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:17:05.484 16:28:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:05.484 16:28:39 -- host/auth.sh@44 -- # digest=sha384 00:17:05.484 16:28:39 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:05.484 16:28:39 -- host/auth.sh@44 -- # keyid=0 00:17:05.484 16:28:39 -- host/auth.sh@45 -- # key=DHHC-1:00:MDcwMmI4YTM0OWE3OTA4OTg2MjhkZGUwNGU0MDNkYWMHRk0g: 00:17:05.484 16:28:39 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:17:05.484 16:28:39 -- host/auth.sh@48 -- # echo ffdhe2048 00:17:05.484 16:28:39 -- host/auth.sh@49 -- # echo DHHC-1:00:MDcwMmI4YTM0OWE3OTA4OTg2MjhkZGUwNGU0MDNkYWMHRk0g: 00:17:05.485 16:28:39 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 0 00:17:05.485 16:28:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:05.485 16:28:39 -- host/auth.sh@68 -- # digest=sha384 00:17:05.485 16:28:39 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:17:05.485 16:28:39 -- host/auth.sh@68 -- # keyid=0 00:17:05.485 16:28:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:05.485 16:28:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.485 16:28:39 -- common/autotest_common.sh@10 -- # set +x 00:17:05.485 16:28:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.485 16:28:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:05.485 16:28:39 -- nvmf/common.sh@717 -- # local ip 00:17:05.485 16:28:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:05.485 16:28:39 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:17:05.485 16:28:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:05.485 16:28:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:05.485 16:28:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:05.485 16:28:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:05.485 16:28:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:05.485 16:28:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:05.485 16:28:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:05.485 16:28:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:17:05.485 16:28:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.485 16:28:39 -- common/autotest_common.sh@10 -- # set +x 00:17:05.743 nvme0n1 00:17:05.743 16:28:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.743 16:28:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:05.743 16:28:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.743 16:28:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:05.743 16:28:39 -- common/autotest_common.sh@10 -- # set +x 00:17:05.743 16:28:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.743 16:28:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.743 16:28:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:05.743 16:28:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.743 16:28:39 -- common/autotest_common.sh@10 -- # set +x 00:17:05.743 16:28:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.743 16:28:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:05.743 16:28:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:17:05.743 16:28:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:05.743 16:28:39 -- host/auth.sh@44 -- # digest=sha384 00:17:05.743 16:28:39 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:05.743 16:28:39 -- host/auth.sh@44 -- # keyid=1 00:17:05.744 16:28:39 -- host/auth.sh@45 -- # key=DHHC-1:00:YzAxZWRkOWRmYWMxN2Q4ZjRjYTRhNjViMTYwNDk3MmZkMjY0YjUzNTVlNzNkNmRhPQxEjg==: 00:17:05.744 16:28:39 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:17:05.744 16:28:39 -- host/auth.sh@48 -- # echo ffdhe2048 00:17:05.744 16:28:39 -- host/auth.sh@49 -- # echo DHHC-1:00:YzAxZWRkOWRmYWMxN2Q4ZjRjYTRhNjViMTYwNDk3MmZkMjY0YjUzNTVlNzNkNmRhPQxEjg==: 00:17:05.744 16:28:39 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 1 00:17:05.744 16:28:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:05.744 16:28:39 -- host/auth.sh@68 -- # digest=sha384 00:17:05.744 16:28:39 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:17:05.744 16:28:39 -- host/auth.sh@68 -- # keyid=1 00:17:05.744 16:28:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:05.744 16:28:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.744 16:28:39 -- common/autotest_common.sh@10 -- # set +x 00:17:05.744 16:28:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.744 16:28:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:05.744 16:28:39 -- nvmf/common.sh@717 -- # local ip 00:17:05.744 16:28:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:05.744 16:28:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:05.744 16:28:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:05.744 
16:28:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:05.744 16:28:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:05.744 16:28:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:05.744 16:28:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:05.744 16:28:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:05.744 16:28:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:05.744 16:28:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:17:05.744 16:28:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.744 16:28:39 -- common/autotest_common.sh@10 -- # set +x 00:17:05.744 nvme0n1 00:17:05.744 16:28:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.744 16:28:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:05.744 16:28:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.744 16:28:39 -- common/autotest_common.sh@10 -- # set +x 00:17:05.744 16:28:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:05.744 16:28:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.744 16:28:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.744 16:28:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:05.744 16:28:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.744 16:28:39 -- common/autotest_common.sh@10 -- # set +x 00:17:05.744 16:28:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.744 16:28:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:05.744 16:28:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:17:05.744 16:28:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:05.744 16:28:39 -- host/auth.sh@44 -- # digest=sha384 00:17:06.003 16:28:39 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:06.003 16:28:39 -- host/auth.sh@44 -- # keyid=2 00:17:06.003 16:28:39 -- host/auth.sh@45 -- # key=DHHC-1:01:YzRlYmMwN2Q5ZDY4NWFkZjEwYzY3MjU0ZTRjOGZiZTjeXngU: 00:17:06.003 16:28:39 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:17:06.003 16:28:39 -- host/auth.sh@48 -- # echo ffdhe2048 00:17:06.003 16:28:39 -- host/auth.sh@49 -- # echo DHHC-1:01:YzRlYmMwN2Q5ZDY4NWFkZjEwYzY3MjU0ZTRjOGZiZTjeXngU: 00:17:06.003 16:28:39 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 2 00:17:06.003 16:28:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:06.003 16:28:39 -- host/auth.sh@68 -- # digest=sha384 00:17:06.003 16:28:39 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:17:06.003 16:28:39 -- host/auth.sh@68 -- # keyid=2 00:17:06.003 16:28:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:06.003 16:28:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:06.003 16:28:39 -- common/autotest_common.sh@10 -- # set +x 00:17:06.003 16:28:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:06.003 16:28:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:06.003 16:28:39 -- nvmf/common.sh@717 -- # local ip 00:17:06.003 16:28:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:06.003 16:28:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:06.003 16:28:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:06.003 16:28:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:06.003 16:28:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:06.003 16:28:39 -- 
nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:06.003 16:28:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:06.003 16:28:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:06.003 16:28:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:06.003 16:28:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:06.003 16:28:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:06.003 16:28:39 -- common/autotest_common.sh@10 -- # set +x 00:17:06.003 nvme0n1 00:17:06.003 16:28:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:06.003 16:28:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:06.003 16:28:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:06.003 16:28:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:06.003 16:28:39 -- common/autotest_common.sh@10 -- # set +x 00:17:06.003 16:28:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:06.003 16:28:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.003 16:28:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:06.003 16:28:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:06.003 16:28:39 -- common/autotest_common.sh@10 -- # set +x 00:17:06.003 16:28:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:06.003 16:28:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:06.003 16:28:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:17:06.003 16:28:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:06.003 16:28:39 -- host/auth.sh@44 -- # digest=sha384 00:17:06.003 16:28:39 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:06.003 16:28:39 -- host/auth.sh@44 -- # keyid=3 00:17:06.003 16:28:39 -- host/auth.sh@45 -- # key=DHHC-1:02:MTZiMGMwNmYwM2NkYjNhOGEzNmI4MTc2Mjg5NTdhNjUzNWZkYTFjN2NiOTM1OWZmpX7ncA==: 00:17:06.003 16:28:39 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:17:06.003 16:28:39 -- host/auth.sh@48 -- # echo ffdhe2048 00:17:06.003 16:28:39 -- host/auth.sh@49 -- # echo DHHC-1:02:MTZiMGMwNmYwM2NkYjNhOGEzNmI4MTc2Mjg5NTdhNjUzNWZkYTFjN2NiOTM1OWZmpX7ncA==: 00:17:06.003 16:28:39 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 3 00:17:06.003 16:28:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:06.003 16:28:39 -- host/auth.sh@68 -- # digest=sha384 00:17:06.003 16:28:39 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:17:06.003 16:28:39 -- host/auth.sh@68 -- # keyid=3 00:17:06.003 16:28:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:06.003 16:28:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:06.003 16:28:39 -- common/autotest_common.sh@10 -- # set +x 00:17:06.003 16:28:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:06.003 16:28:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:06.003 16:28:39 -- nvmf/common.sh@717 -- # local ip 00:17:06.003 16:28:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:06.003 16:28:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:06.003 16:28:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:06.003 16:28:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:06.003 16:28:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:06.003 16:28:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:06.003 16:28:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 
00:17:06.003 16:28:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:06.003 16:28:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:06.003 16:28:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:17:06.003 16:28:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:06.003 16:28:39 -- common/autotest_common.sh@10 -- # set +x 00:17:06.261 nvme0n1 00:17:06.261 16:28:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:06.261 16:28:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:06.261 16:28:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:06.261 16:28:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:06.261 16:28:40 -- common/autotest_common.sh@10 -- # set +x 00:17:06.261 16:28:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:06.261 16:28:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.261 16:28:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:06.261 16:28:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:06.261 16:28:40 -- common/autotest_common.sh@10 -- # set +x 00:17:06.261 16:28:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:06.261 16:28:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:06.261 16:28:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:17:06.261 16:28:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:06.261 16:28:40 -- host/auth.sh@44 -- # digest=sha384 00:17:06.261 16:28:40 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:06.261 16:28:40 -- host/auth.sh@44 -- # keyid=4 00:17:06.261 16:28:40 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjBjOWQwYjQ2OTNmNDk0Zjg3MzRiMDQ1MWNkYzY2ODY5NDliYmJkNWM1NjQxNjBkZjQyNmY0MjdkZTQyMjFlM6Od0wk=: 00:17:06.261 16:28:40 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:17:06.261 16:28:40 -- host/auth.sh@48 -- # echo ffdhe2048 00:17:06.261 16:28:40 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjBjOWQwYjQ2OTNmNDk0Zjg3MzRiMDQ1MWNkYzY2ODY5NDliYmJkNWM1NjQxNjBkZjQyNmY0MjdkZTQyMjFlM6Od0wk=: 00:17:06.261 16:28:40 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 4 00:17:06.261 16:28:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:06.261 16:28:40 -- host/auth.sh@68 -- # digest=sha384 00:17:06.261 16:28:40 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:17:06.261 16:28:40 -- host/auth.sh@68 -- # keyid=4 00:17:06.261 16:28:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:06.261 16:28:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:06.261 16:28:40 -- common/autotest_common.sh@10 -- # set +x 00:17:06.261 16:28:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:06.261 16:28:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:06.261 16:28:40 -- nvmf/common.sh@717 -- # local ip 00:17:06.261 16:28:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:06.261 16:28:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:06.261 16:28:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:06.261 16:28:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:06.261 16:28:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:06.261 16:28:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:06.261 16:28:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:06.261 16:28:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:06.261 
16:28:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:06.261 16:28:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:06.261 16:28:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:06.261 16:28:40 -- common/autotest_common.sh@10 -- # set +x 00:17:06.261 nvme0n1 00:17:06.261 16:28:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:06.261 16:28:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:06.261 16:28:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:06.261 16:28:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:06.261 16:28:40 -- common/autotest_common.sh@10 -- # set +x 00:17:06.261 16:28:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:06.262 16:28:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.262 16:28:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:06.262 16:28:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:06.262 16:28:40 -- common/autotest_common.sh@10 -- # set +x 00:17:06.262 16:28:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:06.262 16:28:40 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:17:06.262 16:28:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:06.262 16:28:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:17:06.262 16:28:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:06.262 16:28:40 -- host/auth.sh@44 -- # digest=sha384 00:17:06.262 16:28:40 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:06.262 16:28:40 -- host/auth.sh@44 -- # keyid=0 00:17:06.262 16:28:40 -- host/auth.sh@45 -- # key=DHHC-1:00:MDcwMmI4YTM0OWE3OTA4OTg2MjhkZGUwNGU0MDNkYWMHRk0g: 00:17:06.262 16:28:40 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:17:06.262 16:28:40 -- host/auth.sh@48 -- # echo ffdhe3072 00:17:06.262 16:28:40 -- host/auth.sh@49 -- # echo DHHC-1:00:MDcwMmI4YTM0OWE3OTA4OTg2MjhkZGUwNGU0MDNkYWMHRk0g: 00:17:06.262 16:28:40 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 0 00:17:06.262 16:28:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:06.262 16:28:40 -- host/auth.sh@68 -- # digest=sha384 00:17:06.262 16:28:40 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:17:06.262 16:28:40 -- host/auth.sh@68 -- # keyid=0 00:17:06.262 16:28:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:06.520 16:28:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:06.520 16:28:40 -- common/autotest_common.sh@10 -- # set +x 00:17:06.520 16:28:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:06.520 16:28:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:06.520 16:28:40 -- nvmf/common.sh@717 -- # local ip 00:17:06.520 16:28:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:06.520 16:28:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:06.520 16:28:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:06.520 16:28:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:06.520 16:28:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:06.520 16:28:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:06.520 16:28:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:06.520 16:28:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:06.520 16:28:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:06.520 16:28:40 -- 
host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:17:06.520 16:28:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:06.520 16:28:40 -- common/autotest_common.sh@10 -- # set +x 00:17:06.520 nvme0n1 00:17:06.520 16:28:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:06.520 16:28:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:06.520 16:28:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:06.520 16:28:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:06.520 16:28:40 -- common/autotest_common.sh@10 -- # set +x 00:17:06.520 16:28:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:06.520 16:28:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.520 16:28:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:06.520 16:28:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:06.520 16:28:40 -- common/autotest_common.sh@10 -- # set +x 00:17:06.520 16:28:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:06.520 16:28:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:06.520 16:28:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:17:06.520 16:28:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:06.520 16:28:40 -- host/auth.sh@44 -- # digest=sha384 00:17:06.520 16:28:40 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:06.520 16:28:40 -- host/auth.sh@44 -- # keyid=1 00:17:06.520 16:28:40 -- host/auth.sh@45 -- # key=DHHC-1:00:YzAxZWRkOWRmYWMxN2Q4ZjRjYTRhNjViMTYwNDk3MmZkMjY0YjUzNTVlNzNkNmRhPQxEjg==: 00:17:06.520 16:28:40 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:17:06.520 16:28:40 -- host/auth.sh@48 -- # echo ffdhe3072 00:17:06.520 16:28:40 -- host/auth.sh@49 -- # echo DHHC-1:00:YzAxZWRkOWRmYWMxN2Q4ZjRjYTRhNjViMTYwNDk3MmZkMjY0YjUzNTVlNzNkNmRhPQxEjg==: 00:17:06.520 16:28:40 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 1 00:17:06.520 16:28:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:06.520 16:28:40 -- host/auth.sh@68 -- # digest=sha384 00:17:06.520 16:28:40 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:17:06.520 16:28:40 -- host/auth.sh@68 -- # keyid=1 00:17:06.520 16:28:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:06.520 16:28:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:06.520 16:28:40 -- common/autotest_common.sh@10 -- # set +x 00:17:06.520 16:28:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:06.520 16:28:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:06.520 16:28:40 -- nvmf/common.sh@717 -- # local ip 00:17:06.520 16:28:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:06.520 16:28:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:06.520 16:28:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:06.520 16:28:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:06.520 16:28:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:06.520 16:28:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:06.520 16:28:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:06.520 16:28:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:06.520 16:28:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:06.521 16:28:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:17:06.521 16:28:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:06.521 16:28:40 -- common/autotest_common.sh@10 -- # set +x 00:17:06.780 nvme0n1 00:17:06.780 16:28:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:06.780 16:28:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:06.780 16:28:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:06.780 16:28:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:06.780 16:28:40 -- common/autotest_common.sh@10 -- # set +x 00:17:06.780 16:28:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:06.780 16:28:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.780 16:28:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:06.780 16:28:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:06.780 16:28:40 -- common/autotest_common.sh@10 -- # set +x 00:17:06.780 16:28:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:06.780 16:28:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:06.780 16:28:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:17:06.780 16:28:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:06.780 16:28:40 -- host/auth.sh@44 -- # digest=sha384 00:17:06.780 16:28:40 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:06.780 16:28:40 -- host/auth.sh@44 -- # keyid=2 00:17:06.780 16:28:40 -- host/auth.sh@45 -- # key=DHHC-1:01:YzRlYmMwN2Q5ZDY4NWFkZjEwYzY3MjU0ZTRjOGZiZTjeXngU: 00:17:06.780 16:28:40 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:17:06.780 16:28:40 -- host/auth.sh@48 -- # echo ffdhe3072 00:17:06.780 16:28:40 -- host/auth.sh@49 -- # echo DHHC-1:01:YzRlYmMwN2Q5ZDY4NWFkZjEwYzY3MjU0ZTRjOGZiZTjeXngU: 00:17:06.780 16:28:40 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 2 00:17:06.780 16:28:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:06.780 16:28:40 -- host/auth.sh@68 -- # digest=sha384 00:17:06.780 16:28:40 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:17:06.780 16:28:40 -- host/auth.sh@68 -- # keyid=2 00:17:06.780 16:28:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:06.780 16:28:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:06.780 16:28:40 -- common/autotest_common.sh@10 -- # set +x 00:17:06.780 16:28:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:06.780 16:28:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:06.780 16:28:40 -- nvmf/common.sh@717 -- # local ip 00:17:06.780 16:28:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:06.780 16:28:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:06.780 16:28:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:06.780 16:28:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:06.780 16:28:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:06.780 16:28:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:06.780 16:28:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:06.780 16:28:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:06.780 16:28:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:06.780 16:28:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:06.780 16:28:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:06.780 
16:28:40 -- common/autotest_common.sh@10 -- # set +x 00:17:07.039 nvme0n1 00:17:07.039 16:28:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.039 16:28:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:07.039 16:28:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:07.039 16:28:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.039 16:28:40 -- common/autotest_common.sh@10 -- # set +x 00:17:07.039 16:28:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.039 16:28:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.039 16:28:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:07.039 16:28:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.039 16:28:40 -- common/autotest_common.sh@10 -- # set +x 00:17:07.039 16:28:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.039 16:28:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:07.039 16:28:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:17:07.039 16:28:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:07.039 16:28:40 -- host/auth.sh@44 -- # digest=sha384 00:17:07.039 16:28:40 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:07.039 16:28:40 -- host/auth.sh@44 -- # keyid=3 00:17:07.039 16:28:40 -- host/auth.sh@45 -- # key=DHHC-1:02:MTZiMGMwNmYwM2NkYjNhOGEzNmI4MTc2Mjg5NTdhNjUzNWZkYTFjN2NiOTM1OWZmpX7ncA==: 00:17:07.039 16:28:40 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:17:07.039 16:28:40 -- host/auth.sh@48 -- # echo ffdhe3072 00:17:07.039 16:28:40 -- host/auth.sh@49 -- # echo DHHC-1:02:MTZiMGMwNmYwM2NkYjNhOGEzNmI4MTc2Mjg5NTdhNjUzNWZkYTFjN2NiOTM1OWZmpX7ncA==: 00:17:07.039 16:28:40 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 3 00:17:07.039 16:28:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:07.039 16:28:40 -- host/auth.sh@68 -- # digest=sha384 00:17:07.039 16:28:40 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:17:07.039 16:28:40 -- host/auth.sh@68 -- # keyid=3 00:17:07.039 16:28:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:07.039 16:28:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.039 16:28:40 -- common/autotest_common.sh@10 -- # set +x 00:17:07.039 16:28:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.039 16:28:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:07.039 16:28:40 -- nvmf/common.sh@717 -- # local ip 00:17:07.039 16:28:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:07.039 16:28:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:07.039 16:28:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:07.039 16:28:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:07.039 16:28:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:07.039 16:28:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:07.039 16:28:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:07.039 16:28:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:07.039 16:28:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:07.039 16:28:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:17:07.039 16:28:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.039 16:28:40 -- common/autotest_common.sh@10 -- # set +x 00:17:07.039 nvme0n1 00:17:07.039 16:28:41 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.039 16:28:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:07.039 16:28:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.039 16:28:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:07.039 16:28:41 -- common/autotest_common.sh@10 -- # set +x 00:17:07.039 16:28:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.297 16:28:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.297 16:28:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:07.297 16:28:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.297 16:28:41 -- common/autotest_common.sh@10 -- # set +x 00:17:07.297 16:28:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.297 16:28:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:07.297 16:28:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:17:07.297 16:28:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:07.297 16:28:41 -- host/auth.sh@44 -- # digest=sha384 00:17:07.297 16:28:41 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:07.297 16:28:41 -- host/auth.sh@44 -- # keyid=4 00:17:07.297 16:28:41 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjBjOWQwYjQ2OTNmNDk0Zjg3MzRiMDQ1MWNkYzY2ODY5NDliYmJkNWM1NjQxNjBkZjQyNmY0MjdkZTQyMjFlM6Od0wk=: 00:17:07.297 16:28:41 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:17:07.297 16:28:41 -- host/auth.sh@48 -- # echo ffdhe3072 00:17:07.297 16:28:41 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjBjOWQwYjQ2OTNmNDk0Zjg3MzRiMDQ1MWNkYzY2ODY5NDliYmJkNWM1NjQxNjBkZjQyNmY0MjdkZTQyMjFlM6Od0wk=: 00:17:07.297 16:28:41 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 4 00:17:07.297 16:28:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:07.297 16:28:41 -- host/auth.sh@68 -- # digest=sha384 00:17:07.297 16:28:41 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:17:07.297 16:28:41 -- host/auth.sh@68 -- # keyid=4 00:17:07.297 16:28:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:07.297 16:28:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.297 16:28:41 -- common/autotest_common.sh@10 -- # set +x 00:17:07.297 16:28:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.297 16:28:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:07.297 16:28:41 -- nvmf/common.sh@717 -- # local ip 00:17:07.297 16:28:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:07.297 16:28:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:07.297 16:28:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:07.297 16:28:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:07.297 16:28:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:07.297 16:28:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:07.297 16:28:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:07.297 16:28:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:07.297 16:28:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:07.297 16:28:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:07.297 16:28:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.297 16:28:41 -- common/autotest_common.sh@10 -- # set +x 00:17:07.298 nvme0n1 00:17:07.298 16:28:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.298 16:28:41 -- 
host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:07.298 16:28:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.298 16:28:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:07.298 16:28:41 -- common/autotest_common.sh@10 -- # set +x 00:17:07.298 16:28:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.298 16:28:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.298 16:28:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:07.298 16:28:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.298 16:28:41 -- common/autotest_common.sh@10 -- # set +x 00:17:07.298 16:28:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.298 16:28:41 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:17:07.298 16:28:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:07.298 16:28:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:17:07.298 16:28:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:07.298 16:28:41 -- host/auth.sh@44 -- # digest=sha384 00:17:07.298 16:28:41 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:07.298 16:28:41 -- host/auth.sh@44 -- # keyid=0 00:17:07.298 16:28:41 -- host/auth.sh@45 -- # key=DHHC-1:00:MDcwMmI4YTM0OWE3OTA4OTg2MjhkZGUwNGU0MDNkYWMHRk0g: 00:17:07.298 16:28:41 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:17:07.298 16:28:41 -- host/auth.sh@48 -- # echo ffdhe4096 00:17:07.298 16:28:41 -- host/auth.sh@49 -- # echo DHHC-1:00:MDcwMmI4YTM0OWE3OTA4OTg2MjhkZGUwNGU0MDNkYWMHRk0g: 00:17:07.298 16:28:41 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 0 00:17:07.298 16:28:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:07.298 16:28:41 -- host/auth.sh@68 -- # digest=sha384 00:17:07.298 16:28:41 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:17:07.298 16:28:41 -- host/auth.sh@68 -- # keyid=0 00:17:07.298 16:28:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:07.298 16:28:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.298 16:28:41 -- common/autotest_common.sh@10 -- # set +x 00:17:07.556 16:28:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.556 16:28:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:07.556 16:28:41 -- nvmf/common.sh@717 -- # local ip 00:17:07.556 16:28:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:07.556 16:28:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:07.556 16:28:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:07.556 16:28:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:07.556 16:28:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:07.556 16:28:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:07.556 16:28:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:07.556 16:28:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:07.556 16:28:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:07.556 16:28:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:17:07.556 16:28:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.556 16:28:41 -- common/autotest_common.sh@10 -- # set +x 00:17:07.556 nvme0n1 00:17:07.556 16:28:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.556 16:28:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:07.556 16:28:41 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.556 16:28:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:07.556 16:28:41 -- common/autotest_common.sh@10 -- # set +x 00:17:07.556 16:28:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.556 16:28:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.556 16:28:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:07.556 16:28:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.556 16:28:41 -- common/autotest_common.sh@10 -- # set +x 00:17:07.556 16:28:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.556 16:28:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:07.556 16:28:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:17:07.556 16:28:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:07.556 16:28:41 -- host/auth.sh@44 -- # digest=sha384 00:17:07.556 16:28:41 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:07.556 16:28:41 -- host/auth.sh@44 -- # keyid=1 00:17:07.556 16:28:41 -- host/auth.sh@45 -- # key=DHHC-1:00:YzAxZWRkOWRmYWMxN2Q4ZjRjYTRhNjViMTYwNDk3MmZkMjY0YjUzNTVlNzNkNmRhPQxEjg==: 00:17:07.556 16:28:41 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:17:07.815 16:28:41 -- host/auth.sh@48 -- # echo ffdhe4096 00:17:07.815 16:28:41 -- host/auth.sh@49 -- # echo DHHC-1:00:YzAxZWRkOWRmYWMxN2Q4ZjRjYTRhNjViMTYwNDk3MmZkMjY0YjUzNTVlNzNkNmRhPQxEjg==: 00:17:07.815 16:28:41 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 1 00:17:07.815 16:28:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:07.815 16:28:41 -- host/auth.sh@68 -- # digest=sha384 00:17:07.815 16:28:41 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:17:07.815 16:28:41 -- host/auth.sh@68 -- # keyid=1 00:17:07.815 16:28:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:07.815 16:28:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.815 16:28:41 -- common/autotest_common.sh@10 -- # set +x 00:17:07.815 16:28:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.815 16:28:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:07.815 16:28:41 -- nvmf/common.sh@717 -- # local ip 00:17:07.815 16:28:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:07.815 16:28:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:07.815 16:28:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:07.815 16:28:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:07.815 16:28:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:07.815 16:28:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:07.815 16:28:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:07.815 16:28:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:07.815 16:28:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:07.815 16:28:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:17:07.815 16:28:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.815 16:28:41 -- common/autotest_common.sh@10 -- # set +x 00:17:07.815 nvme0n1 00:17:07.815 16:28:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.815 16:28:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:07.815 16:28:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.815 16:28:41 -- host/auth.sh@73 -- # jq -r '.[].name' 
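Everything in this stretch of the log is one test cycle per (digest, dhgroup, keyid) combination: provision a DH-HMAC-CHAP key on the nvmet target, configure the SPDK host for the matching digest and DH group, attach, check that the controller comes up as nvme0, and detach. A minimal sketch of the outer loop as it can be reconstructed from the host/auth.sh xtrace line numbers; the array contents are partly assumed, since this excerpt only shows sha384/sha512 and the ffdhe groups:

digests=(sha256 sha384 sha512)   # sha384/sha512 appear here; sha256 presumably ran earlier
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
keys=()                          # five DHHC-1:0x... secrets indexed 0..4, printed at @45; elided here

for digest in "${digests[@]}"; do            # host/auth.sh@107
    for dhgroup in "${dhgroups[@]}"; do      # host/auth.sh@108
        for keyid in "${!keys[@]}"; do       # host/auth.sh@109
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # @110: target side
            connect_authenticate "$digest" "$dhgroup" "$keyid"   # @111: host side
        done
    done
done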
00:17:07.815 16:28:41 -- common/autotest_common.sh@10 -- # set +x 00:17:07.815 16:28:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.815 16:28:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.815 16:28:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:07.815 16:28:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.815 16:28:41 -- common/autotest_common.sh@10 -- # set +x 00:17:08.073 16:28:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:08.073 16:28:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:08.073 16:28:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:17:08.073 16:28:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:08.073 16:28:41 -- host/auth.sh@44 -- # digest=sha384 00:17:08.073 16:28:41 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:08.073 16:28:41 -- host/auth.sh@44 -- # keyid=2 00:17:08.073 16:28:41 -- host/auth.sh@45 -- # key=DHHC-1:01:YzRlYmMwN2Q5ZDY4NWFkZjEwYzY3MjU0ZTRjOGZiZTjeXngU: 00:17:08.073 16:28:41 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:17:08.073 16:28:41 -- host/auth.sh@48 -- # echo ffdhe4096 00:17:08.073 16:28:41 -- host/auth.sh@49 -- # echo DHHC-1:01:YzRlYmMwN2Q5ZDY4NWFkZjEwYzY3MjU0ZTRjOGZiZTjeXngU: 00:17:08.073 16:28:41 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 2 00:17:08.073 16:28:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:08.073 16:28:41 -- host/auth.sh@68 -- # digest=sha384 00:17:08.073 16:28:41 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:17:08.073 16:28:41 -- host/auth.sh@68 -- # keyid=2 00:17:08.073 16:28:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:08.073 16:28:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:08.073 16:28:41 -- common/autotest_common.sh@10 -- # set +x 00:17:08.073 16:28:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:08.073 16:28:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:08.073 16:28:41 -- nvmf/common.sh@717 -- # local ip 00:17:08.073 16:28:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:08.073 16:28:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:08.073 16:28:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.073 16:28:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.073 16:28:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:08.073 16:28:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.073 16:28:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:08.073 16:28:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:08.073 16:28:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:08.073 16:28:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:08.073 16:28:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:08.073 16:28:41 -- common/autotest_common.sh@10 -- # set +x 00:17:08.073 nvme0n1 00:17:08.073 16:28:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:08.073 16:28:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:08.073 16:28:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:08.073 16:28:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:08.073 16:28:42 -- common/autotest_common.sh@10 -- # set +x 00:17:08.073 16:28:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:08.073 16:28:42 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.073 16:28:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:08.073 16:28:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:08.073 16:28:42 -- common/autotest_common.sh@10 -- # set +x 00:17:08.331 16:28:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:08.331 16:28:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:08.331 16:28:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:17:08.331 16:28:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:08.331 16:28:42 -- host/auth.sh@44 -- # digest=sha384 00:17:08.331 16:28:42 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:08.331 16:28:42 -- host/auth.sh@44 -- # keyid=3 00:17:08.331 16:28:42 -- host/auth.sh@45 -- # key=DHHC-1:02:MTZiMGMwNmYwM2NkYjNhOGEzNmI4MTc2Mjg5NTdhNjUzNWZkYTFjN2NiOTM1OWZmpX7ncA==: 00:17:08.331 16:28:42 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:17:08.331 16:28:42 -- host/auth.sh@48 -- # echo ffdhe4096 00:17:08.331 16:28:42 -- host/auth.sh@49 -- # echo DHHC-1:02:MTZiMGMwNmYwM2NkYjNhOGEzNmI4MTc2Mjg5NTdhNjUzNWZkYTFjN2NiOTM1OWZmpX7ncA==: 00:17:08.331 16:28:42 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 3 00:17:08.331 16:28:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:08.331 16:28:42 -- host/auth.sh@68 -- # digest=sha384 00:17:08.331 16:28:42 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:17:08.331 16:28:42 -- host/auth.sh@68 -- # keyid=3 00:17:08.331 16:28:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:08.331 16:28:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:08.331 16:28:42 -- common/autotest_common.sh@10 -- # set +x 00:17:08.331 16:28:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:08.331 16:28:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:08.331 16:28:42 -- nvmf/common.sh@717 -- # local ip 00:17:08.331 16:28:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:08.331 16:28:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:08.331 16:28:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.331 16:28:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.331 16:28:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:08.331 16:28:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.331 16:28:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:08.331 16:28:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:08.331 16:28:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:08.331 16:28:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:17:08.331 16:28:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:08.331 16:28:42 -- common/autotest_common.sh@10 -- # set +x 00:17:08.331 nvme0n1 00:17:08.331 16:28:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:08.331 16:28:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:08.331 16:28:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:08.331 16:28:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:08.331 16:28:42 -- common/autotest_common.sh@10 -- # set +x 00:17:08.331 16:28:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:08.589 16:28:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.589 16:28:42 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:17:08.589 16:28:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:08.589 16:28:42 -- common/autotest_common.sh@10 -- # set +x 00:17:08.589 16:28:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:08.589 16:28:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:08.589 16:28:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:17:08.589 16:28:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:08.589 16:28:42 -- host/auth.sh@44 -- # digest=sha384 00:17:08.589 16:28:42 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:08.589 16:28:42 -- host/auth.sh@44 -- # keyid=4 00:17:08.589 16:28:42 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjBjOWQwYjQ2OTNmNDk0Zjg3MzRiMDQ1MWNkYzY2ODY5NDliYmJkNWM1NjQxNjBkZjQyNmY0MjdkZTQyMjFlM6Od0wk=: 00:17:08.589 16:28:42 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:17:08.589 16:28:42 -- host/auth.sh@48 -- # echo ffdhe4096 00:17:08.589 16:28:42 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjBjOWQwYjQ2OTNmNDk0Zjg3MzRiMDQ1MWNkYzY2ODY5NDliYmJkNWM1NjQxNjBkZjQyNmY0MjdkZTQyMjFlM6Od0wk=: 00:17:08.589 16:28:42 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 4 00:17:08.589 16:28:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:08.589 16:28:42 -- host/auth.sh@68 -- # digest=sha384 00:17:08.589 16:28:42 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:17:08.589 16:28:42 -- host/auth.sh@68 -- # keyid=4 00:17:08.589 16:28:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:08.589 16:28:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:08.589 16:28:42 -- common/autotest_common.sh@10 -- # set +x 00:17:08.589 16:28:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:08.589 16:28:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:08.589 16:28:42 -- nvmf/common.sh@717 -- # local ip 00:17:08.589 16:28:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:08.589 16:28:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:08.589 16:28:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.589 16:28:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.589 16:28:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:08.589 16:28:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.589 16:28:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:08.589 16:28:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:08.589 16:28:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:08.589 16:28:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:08.589 16:28:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:08.589 16:28:42 -- common/autotest_common.sh@10 -- # set +x 00:17:08.589 nvme0n1 00:17:08.589 16:28:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:08.589 16:28:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:08.589 16:28:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:08.589 16:28:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:08.589 16:28:42 -- common/autotest_common.sh@10 -- # set +x 00:17:08.589 16:28:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:08.847 16:28:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.847 16:28:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:08.847 16:28:42 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:17:08.847 16:28:42 -- common/autotest_common.sh@10 -- # set +x 00:17:08.847 16:28:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:08.847 16:28:42 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:17:08.847 16:28:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:08.847 16:28:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:17:08.847 16:28:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:08.847 16:28:42 -- host/auth.sh@44 -- # digest=sha384 00:17:08.847 16:28:42 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:08.847 16:28:42 -- host/auth.sh@44 -- # keyid=0 00:17:08.847 16:28:42 -- host/auth.sh@45 -- # key=DHHC-1:00:MDcwMmI4YTM0OWE3OTA4OTg2MjhkZGUwNGU0MDNkYWMHRk0g: 00:17:08.847 16:28:42 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:17:08.847 16:28:42 -- host/auth.sh@48 -- # echo ffdhe6144 00:17:08.847 16:28:42 -- host/auth.sh@49 -- # echo DHHC-1:00:MDcwMmI4YTM0OWE3OTA4OTg2MjhkZGUwNGU0MDNkYWMHRk0g: 00:17:08.847 16:28:42 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 0 00:17:08.847 16:28:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:08.847 16:28:42 -- host/auth.sh@68 -- # digest=sha384 00:17:08.847 16:28:42 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:17:08.847 16:28:42 -- host/auth.sh@68 -- # keyid=0 00:17:08.847 16:28:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:08.847 16:28:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:08.847 16:28:42 -- common/autotest_common.sh@10 -- # set +x 00:17:08.847 16:28:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:08.847 16:28:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:08.847 16:28:42 -- nvmf/common.sh@717 -- # local ip 00:17:08.847 16:28:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:08.847 16:28:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:08.847 16:28:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.847 16:28:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.847 16:28:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:08.847 16:28:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.847 16:28:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:08.847 16:28:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:08.847 16:28:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:08.847 16:28:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:17:08.847 16:28:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:08.847 16:28:42 -- common/autotest_common.sh@10 -- # set +x 00:17:09.105 nvme0n1 00:17:09.105 16:28:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:09.105 16:28:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.105 16:28:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:09.105 16:28:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:09.105 16:28:43 -- common/autotest_common.sh@10 -- # set +x 00:17:09.105 16:28:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:09.105 16:28:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.105 16:28:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:09.105 16:28:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:09.105 16:28:43 -- 
common/autotest_common.sh@10 -- # set +x 00:17:09.105 16:28:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:09.105 16:28:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:09.105 16:28:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:17:09.105 16:28:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:09.105 16:28:43 -- host/auth.sh@44 -- # digest=sha384 00:17:09.105 16:28:43 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:09.105 16:28:43 -- host/auth.sh@44 -- # keyid=1 00:17:09.105 16:28:43 -- host/auth.sh@45 -- # key=DHHC-1:00:YzAxZWRkOWRmYWMxN2Q4ZjRjYTRhNjViMTYwNDk3MmZkMjY0YjUzNTVlNzNkNmRhPQxEjg==: 00:17:09.105 16:28:43 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:17:09.105 16:28:43 -- host/auth.sh@48 -- # echo ffdhe6144 00:17:09.105 16:28:43 -- host/auth.sh@49 -- # echo DHHC-1:00:YzAxZWRkOWRmYWMxN2Q4ZjRjYTRhNjViMTYwNDk3MmZkMjY0YjUzNTVlNzNkNmRhPQxEjg==: 00:17:09.105 16:28:43 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 1 00:17:09.105 16:28:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:09.105 16:28:43 -- host/auth.sh@68 -- # digest=sha384 00:17:09.105 16:28:43 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:17:09.105 16:28:43 -- host/auth.sh@68 -- # keyid=1 00:17:09.105 16:28:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:09.105 16:28:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:09.105 16:28:43 -- common/autotest_common.sh@10 -- # set +x 00:17:09.105 16:28:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:09.105 16:28:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:09.105 16:28:43 -- nvmf/common.sh@717 -- # local ip 00:17:09.105 16:28:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:09.105 16:28:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:09.105 16:28:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.105 16:28:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.105 16:28:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:09.105 16:28:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.105 16:28:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:09.105 16:28:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:09.105 16:28:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:09.105 16:28:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:17:09.105 16:28:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:09.105 16:28:43 -- common/autotest_common.sh@10 -- # set +x 00:17:09.672 nvme0n1 00:17:09.672 16:28:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:09.672 16:28:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.672 16:28:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:09.672 16:28:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:09.672 16:28:43 -- common/autotest_common.sh@10 -- # set +x 00:17:09.672 16:28:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:09.672 16:28:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.672 16:28:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:09.672 16:28:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:09.672 16:28:43 -- common/autotest_common.sh@10 -- # set +x 00:17:09.672 16:28:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
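get_main_ns_ip, traced repeatedly above from nvmf/common.sh@717-731, just maps the transport to the right address variable and dereferences it (10.0.0.1 in this run). A sketch under the assumption that a variable like TEST_TRANSPORT drives the [[ -z tcp ]] checks; the real helper may combine the guards differently:

get_main_ns_ip() {
    local ip                                          # @717
    local -A ip_candidates=(                          # @718
        [rdma]=NVMF_FIRST_TARGET_IP                   # @720
        [tcp]=NVMF_INITIATOR_IP                       # @721
    )
    [[ -z $TEST_TRANSPORT ]] && return 1              # @723: transport must be set
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}              # @724: the variable *name*, e.g. NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1                       # @726: indirect expansion, 10.0.0.1 here
    echo "${!ip}"                                     # @731
}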
00:17:09.672 16:28:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:09.672 16:28:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:17:09.672 16:28:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:09.672 16:28:43 -- host/auth.sh@44 -- # digest=sha384 00:17:09.672 16:28:43 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:09.672 16:28:43 -- host/auth.sh@44 -- # keyid=2 00:17:09.672 16:28:43 -- host/auth.sh@45 -- # key=DHHC-1:01:YzRlYmMwN2Q5ZDY4NWFkZjEwYzY3MjU0ZTRjOGZiZTjeXngU: 00:17:09.672 16:28:43 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:17:09.672 16:28:43 -- host/auth.sh@48 -- # echo ffdhe6144 00:17:09.672 16:28:43 -- host/auth.sh@49 -- # echo DHHC-1:01:YzRlYmMwN2Q5ZDY4NWFkZjEwYzY3MjU0ZTRjOGZiZTjeXngU: 00:17:09.672 16:28:43 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 2 00:17:09.672 16:28:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:09.672 16:28:43 -- host/auth.sh@68 -- # digest=sha384 00:17:09.672 16:28:43 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:17:09.672 16:28:43 -- host/auth.sh@68 -- # keyid=2 00:17:09.672 16:28:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:09.672 16:28:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:09.672 16:28:43 -- common/autotest_common.sh@10 -- # set +x 00:17:09.672 16:28:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:09.672 16:28:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:09.672 16:28:43 -- nvmf/common.sh@717 -- # local ip 00:17:09.672 16:28:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:09.672 16:28:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:09.672 16:28:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.672 16:28:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.672 16:28:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:09.672 16:28:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.672 16:28:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:09.672 16:28:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:09.672 16:28:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:09.672 16:28:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:09.672 16:28:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:09.672 16:28:43 -- common/autotest_common.sh@10 -- # set +x 00:17:09.931 nvme0n1 00:17:09.931 16:28:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:09.931 16:28:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:09.931 16:28:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.931 16:28:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:09.931 16:28:43 -- common/autotest_common.sh@10 -- # set +x 00:17:09.931 16:28:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:10.190 16:28:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.190 16:28:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:10.190 16:28:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:10.190 16:28:43 -- common/autotest_common.sh@10 -- # set +x 00:17:10.190 16:28:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:10.190 16:28:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:10.190 16:28:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 3 
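nvmet_auth_set_key (host/auth.sh@42-49, traced next) pushes the chosen digest, DH group, and DHHC-1 secret to the kernel nvmet target. The xtrace shows only the echoed values, not where they are redirected; the configfs paths below are an assumption about where those writes land:

nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3               # @42/@44
    local key=${keys[$keyid]}                         # @45: the DHHC-1 secret for this keyid
    # Assumed destinations; the redirections are not visible in this trace:
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo "hmac($digest)" > "$host/dhchap_hash"        # @47: e.g. 'hmac(sha384)'
    echo "$dhgroup" > "$host/dhchap_dhgroup"          # @48: e.g. ffdhe6144
    echo "$key" > "$host/dhchap_key"                  # @49
}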
00:17:10.190 16:28:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:10.190 16:28:43 -- host/auth.sh@44 -- # digest=sha384 00:17:10.190 16:28:43 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:10.190 16:28:43 -- host/auth.sh@44 -- # keyid=3 00:17:10.190 16:28:43 -- host/auth.sh@45 -- # key=DHHC-1:02:MTZiMGMwNmYwM2NkYjNhOGEzNmI4MTc2Mjg5NTdhNjUzNWZkYTFjN2NiOTM1OWZmpX7ncA==: 00:17:10.190 16:28:43 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:17:10.190 16:28:43 -- host/auth.sh@48 -- # echo ffdhe6144 00:17:10.190 16:28:43 -- host/auth.sh@49 -- # echo DHHC-1:02:MTZiMGMwNmYwM2NkYjNhOGEzNmI4MTc2Mjg5NTdhNjUzNWZkYTFjN2NiOTM1OWZmpX7ncA==: 00:17:10.190 16:28:43 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 3 00:17:10.190 16:28:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:10.190 16:28:43 -- host/auth.sh@68 -- # digest=sha384 00:17:10.190 16:28:43 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:17:10.190 16:28:43 -- host/auth.sh@68 -- # keyid=3 00:17:10.190 16:28:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:10.190 16:28:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:10.190 16:28:43 -- common/autotest_common.sh@10 -- # set +x 00:17:10.190 16:28:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:10.190 16:28:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:10.190 16:28:44 -- nvmf/common.sh@717 -- # local ip 00:17:10.190 16:28:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:10.190 16:28:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:10.190 16:28:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:10.190 16:28:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:10.190 16:28:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:10.190 16:28:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:10.190 16:28:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:10.190 16:28:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:10.190 16:28:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:10.190 16:28:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:17:10.190 16:28:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:10.190 16:28:44 -- common/autotest_common.sh@10 -- # set +x 00:17:10.447 nvme0n1 00:17:10.447 16:28:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:10.447 16:28:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:10.447 16:28:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:10.447 16:28:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:10.447 16:28:44 -- common/autotest_common.sh@10 -- # set +x 00:17:10.447 16:28:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:10.447 16:28:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.447 16:28:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:10.447 16:28:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:10.447 16:28:44 -- common/autotest_common.sh@10 -- # set +x 00:17:10.447 16:28:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:10.447 16:28:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:10.447 16:28:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:17:10.447 16:28:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:10.447 16:28:44 -- host/auth.sh@44 -- 
# digest=sha384 00:17:10.447 16:28:44 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:10.447 16:28:44 -- host/auth.sh@44 -- # keyid=4 00:17:10.447 16:28:44 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjBjOWQwYjQ2OTNmNDk0Zjg3MzRiMDQ1MWNkYzY2ODY5NDliYmJkNWM1NjQxNjBkZjQyNmY0MjdkZTQyMjFlM6Od0wk=: 00:17:10.447 16:28:44 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:17:10.447 16:28:44 -- host/auth.sh@48 -- # echo ffdhe6144 00:17:10.447 16:28:44 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjBjOWQwYjQ2OTNmNDk0Zjg3MzRiMDQ1MWNkYzY2ODY5NDliYmJkNWM1NjQxNjBkZjQyNmY0MjdkZTQyMjFlM6Od0wk=: 00:17:10.447 16:28:44 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 4 00:17:10.447 16:28:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:10.447 16:28:44 -- host/auth.sh@68 -- # digest=sha384 00:17:10.447 16:28:44 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:17:10.447 16:28:44 -- host/auth.sh@68 -- # keyid=4 00:17:10.447 16:28:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:10.447 16:28:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:10.447 16:28:44 -- common/autotest_common.sh@10 -- # set +x 00:17:10.447 16:28:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:10.447 16:28:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:10.447 16:28:44 -- nvmf/common.sh@717 -- # local ip 00:17:10.447 16:28:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:10.447 16:28:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:10.447 16:28:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:10.447 16:28:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:10.447 16:28:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:10.447 16:28:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:10.447 16:28:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:10.447 16:28:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:10.447 16:28:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:10.447 16:28:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:10.447 16:28:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:10.447 16:28:44 -- common/autotest_common.sh@10 -- # set +x 00:17:11.061 nvme0n1 00:17:11.061 16:28:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:11.061 16:28:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:11.061 16:28:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:11.061 16:28:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:11.061 16:28:44 -- common/autotest_common.sh@10 -- # set +x 00:17:11.061 16:28:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:11.061 16:28:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.061 16:28:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:11.061 16:28:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:11.061 16:28:44 -- common/autotest_common.sh@10 -- # set +x 00:17:11.061 16:28:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:11.061 16:28:44 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:17:11.061 16:28:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:11.061 16:28:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:17:11.061 16:28:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:11.061 16:28:44 -- host/auth.sh@44 -- # 
digest=sha384 00:17:11.061 16:28:44 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:11.061 16:28:44 -- host/auth.sh@44 -- # keyid=0 00:17:11.061 16:28:44 -- host/auth.sh@45 -- # key=DHHC-1:00:MDcwMmI4YTM0OWE3OTA4OTg2MjhkZGUwNGU0MDNkYWMHRk0g: 00:17:11.061 16:28:44 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:17:11.061 16:28:44 -- host/auth.sh@48 -- # echo ffdhe8192 00:17:11.061 16:28:44 -- host/auth.sh@49 -- # echo DHHC-1:00:MDcwMmI4YTM0OWE3OTA4OTg2MjhkZGUwNGU0MDNkYWMHRk0g: 00:17:11.061 16:28:44 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 0 00:17:11.061 16:28:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:11.061 16:28:44 -- host/auth.sh@68 -- # digest=sha384 00:17:11.061 16:28:44 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:17:11.061 16:28:44 -- host/auth.sh@68 -- # keyid=0 00:17:11.061 16:28:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:11.061 16:28:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:11.061 16:28:44 -- common/autotest_common.sh@10 -- # set +x 00:17:11.061 16:28:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:11.061 16:28:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:11.061 16:28:44 -- nvmf/common.sh@717 -- # local ip 00:17:11.061 16:28:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:11.061 16:28:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:11.061 16:28:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:11.061 16:28:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:11.061 16:28:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:11.061 16:28:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:11.061 16:28:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:11.061 16:28:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:11.061 16:28:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:11.061 16:28:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:17:11.061 16:28:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:11.061 16:28:44 -- common/autotest_common.sh@10 -- # set +x 00:17:11.628 nvme0n1 00:17:11.628 16:28:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:11.628 16:28:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:11.628 16:28:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:11.628 16:28:45 -- common/autotest_common.sh@10 -- # set +x 00:17:11.628 16:28:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:11.628 16:28:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:11.628 16:28:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.628 16:28:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:11.628 16:28:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:11.628 16:28:45 -- common/autotest_common.sh@10 -- # set +x 00:17:11.628 16:28:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:11.628 16:28:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:11.628 16:28:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:17:11.628 16:28:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:11.628 16:28:45 -- host/auth.sh@44 -- # digest=sha384 00:17:11.628 16:28:45 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:11.628 16:28:45 -- host/auth.sh@44 -- # keyid=1 00:17:11.628 16:28:45 -- 
host/auth.sh@45 -- # key=DHHC-1:00:YzAxZWRkOWRmYWMxN2Q4ZjRjYTRhNjViMTYwNDk3MmZkMjY0YjUzNTVlNzNkNmRhPQxEjg==: 00:17:11.628 16:28:45 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:17:11.628 16:28:45 -- host/auth.sh@48 -- # echo ffdhe8192 00:17:11.628 16:28:45 -- host/auth.sh@49 -- # echo DHHC-1:00:YzAxZWRkOWRmYWMxN2Q4ZjRjYTRhNjViMTYwNDk3MmZkMjY0YjUzNTVlNzNkNmRhPQxEjg==: 00:17:11.628 16:28:45 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 1 00:17:11.628 16:28:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:11.628 16:28:45 -- host/auth.sh@68 -- # digest=sha384 00:17:11.628 16:28:45 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:17:11.628 16:28:45 -- host/auth.sh@68 -- # keyid=1 00:17:11.628 16:28:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:11.628 16:28:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:11.628 16:28:45 -- common/autotest_common.sh@10 -- # set +x 00:17:11.628 16:28:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:11.628 16:28:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:11.628 16:28:45 -- nvmf/common.sh@717 -- # local ip 00:17:11.628 16:28:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:11.628 16:28:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:11.628 16:28:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:11.628 16:28:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:11.628 16:28:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:11.628 16:28:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:11.628 16:28:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:11.628 16:28:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:11.628 16:28:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:11.628 16:28:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:17:11.628 16:28:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:11.628 16:28:45 -- common/autotest_common.sh@10 -- # set +x 00:17:12.562 nvme0n1 00:17:12.562 16:28:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:12.562 16:28:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:12.562 16:28:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:12.562 16:28:46 -- common/autotest_common.sh@10 -- # set +x 00:17:12.562 16:28:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:12.562 16:28:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:12.562 16:28:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.562 16:28:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:12.563 16:28:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:12.563 16:28:46 -- common/autotest_common.sh@10 -- # set +x 00:17:12.563 16:28:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:12.563 16:28:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:12.563 16:28:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:17:12.563 16:28:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:12.563 16:28:46 -- host/auth.sh@44 -- # digest=sha384 00:17:12.563 16:28:46 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:12.563 16:28:46 -- host/auth.sh@44 -- # keyid=2 00:17:12.563 16:28:46 -- host/auth.sh@45 -- # key=DHHC-1:01:YzRlYmMwN2Q5ZDY4NWFkZjEwYzY3MjU0ZTRjOGZiZTjeXngU: 00:17:12.563 16:28:46 -- 
host/auth.sh@47 -- # echo 'hmac(sha384)' 00:17:12.563 16:28:46 -- host/auth.sh@48 -- # echo ffdhe8192 00:17:12.563 16:28:46 -- host/auth.sh@49 -- # echo DHHC-1:01:YzRlYmMwN2Q5ZDY4NWFkZjEwYzY3MjU0ZTRjOGZiZTjeXngU: 00:17:12.563 16:28:46 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 2 00:17:12.563 16:28:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:12.563 16:28:46 -- host/auth.sh@68 -- # digest=sha384 00:17:12.563 16:28:46 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:17:12.563 16:28:46 -- host/auth.sh@68 -- # keyid=2 00:17:12.563 16:28:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:12.563 16:28:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:12.563 16:28:46 -- common/autotest_common.sh@10 -- # set +x 00:17:12.563 16:28:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:12.563 16:28:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:12.563 16:28:46 -- nvmf/common.sh@717 -- # local ip 00:17:12.563 16:28:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:12.563 16:28:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:12.563 16:28:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:12.563 16:28:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:12.563 16:28:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:12.563 16:28:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:12.563 16:28:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:12.563 16:28:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:12.563 16:28:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:12.563 16:28:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:12.563 16:28:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:12.563 16:28:46 -- common/autotest_common.sh@10 -- # set +x 00:17:13.129 nvme0n1 00:17:13.129 16:28:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:13.129 16:28:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:13.129 16:28:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:13.129 16:28:47 -- common/autotest_common.sh@10 -- # set +x 00:17:13.129 16:28:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:13.129 16:28:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:13.129 16:28:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.129 16:28:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:13.129 16:28:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:13.129 16:28:47 -- common/autotest_common.sh@10 -- # set +x 00:17:13.129 16:28:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:13.129 16:28:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:13.129 16:28:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:17:13.129 16:28:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:13.129 16:28:47 -- host/auth.sh@44 -- # digest=sha384 00:17:13.129 16:28:47 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:13.129 16:28:47 -- host/auth.sh@44 -- # keyid=3 00:17:13.129 16:28:47 -- host/auth.sh@45 -- # key=DHHC-1:02:MTZiMGMwNmYwM2NkYjNhOGEzNmI4MTc2Mjg5NTdhNjUzNWZkYTFjN2NiOTM1OWZmpX7ncA==: 00:17:13.129 16:28:47 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:17:13.129 16:28:47 -- host/auth.sh@48 -- # echo ffdhe8192 00:17:13.129 16:28:47 -- host/auth.sh@49 
-- # echo DHHC-1:02:MTZiMGMwNmYwM2NkYjNhOGEzNmI4MTc2Mjg5NTdhNjUzNWZkYTFjN2NiOTM1OWZmpX7ncA==: 00:17:13.129 16:28:47 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 3 00:17:13.129 16:28:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:13.129 16:28:47 -- host/auth.sh@68 -- # digest=sha384 00:17:13.129 16:28:47 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:17:13.129 16:28:47 -- host/auth.sh@68 -- # keyid=3 00:17:13.129 16:28:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:13.129 16:28:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:13.129 16:28:47 -- common/autotest_common.sh@10 -- # set +x 00:17:13.129 16:28:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:13.130 16:28:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:13.130 16:28:47 -- nvmf/common.sh@717 -- # local ip 00:17:13.130 16:28:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:13.130 16:28:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:13.130 16:28:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:13.130 16:28:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:13.130 16:28:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:13.130 16:28:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:13.130 16:28:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:13.130 16:28:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:13.130 16:28:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:13.130 16:28:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:17:13.130 16:28:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:13.130 16:28:47 -- common/autotest_common.sh@10 -- # set +x 00:17:14.067 nvme0n1 00:17:14.067 16:28:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:14.067 16:28:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:14.067 16:28:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:14.067 16:28:47 -- common/autotest_common.sh@10 -- # set +x 00:17:14.067 16:28:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:14.067 16:28:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:14.067 16:28:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.067 16:28:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:14.067 16:28:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:14.067 16:28:47 -- common/autotest_common.sh@10 -- # set +x 00:17:14.067 16:28:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:14.067 16:28:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:14.067 16:28:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:17:14.067 16:28:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:14.067 16:28:47 -- host/auth.sh@44 -- # digest=sha384 00:17:14.067 16:28:47 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:14.067 16:28:47 -- host/auth.sh@44 -- # keyid=4 00:17:14.067 16:28:47 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjBjOWQwYjQ2OTNmNDk0Zjg3MzRiMDQ1MWNkYzY2ODY5NDliYmJkNWM1NjQxNjBkZjQyNmY0MjdkZTQyMjFlM6Od0wk=: 00:17:14.067 16:28:47 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:17:14.067 16:28:47 -- host/auth.sh@48 -- # echo ffdhe8192 00:17:14.067 16:28:47 -- host/auth.sh@49 -- # echo 
DHHC-1:03:ZjBjOWQwYjQ2OTNmNDk0Zjg3MzRiMDQ1MWNkYzY2ODY5NDliYmJkNWM1NjQxNjBkZjQyNmY0MjdkZTQyMjFlM6Od0wk=: 00:17:14.067 16:28:47 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 4 00:17:14.067 16:28:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:14.067 16:28:47 -- host/auth.sh@68 -- # digest=sha384 00:17:14.067 16:28:47 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:17:14.067 16:28:47 -- host/auth.sh@68 -- # keyid=4 00:17:14.067 16:28:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:14.067 16:28:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:14.067 16:28:47 -- common/autotest_common.sh@10 -- # set +x 00:17:14.067 16:28:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:14.067 16:28:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:14.067 16:28:47 -- nvmf/common.sh@717 -- # local ip 00:17:14.067 16:28:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:14.067 16:28:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:14.067 16:28:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:14.067 16:28:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:14.067 16:28:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:14.067 16:28:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:14.067 16:28:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:14.067 16:28:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:14.067 16:28:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:14.067 16:28:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:14.067 16:28:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:14.067 16:28:47 -- common/autotest_common.sh@10 -- # set +x 00:17:14.632 nvme0n1 00:17:14.632 16:28:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:14.632 16:28:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:14.632 16:28:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:14.632 16:28:48 -- common/autotest_common.sh@10 -- # set +x 00:17:14.632 16:28:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:14.632 16:28:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:14.632 16:28:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.632 16:28:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:14.632 16:28:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:14.632 16:28:48 -- common/autotest_common.sh@10 -- # set +x 00:17:14.632 16:28:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:14.632 16:28:48 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:17:14.632 16:28:48 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:17:14.632 16:28:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:14.632 16:28:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:17:14.632 16:28:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:14.632 16:28:48 -- host/auth.sh@44 -- # digest=sha512 00:17:14.632 16:28:48 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:14.632 16:28:48 -- host/auth.sh@44 -- # keyid=0 00:17:14.632 16:28:48 -- host/auth.sh@45 -- # key=DHHC-1:00:MDcwMmI4YTM0OWE3OTA4OTg2MjhkZGUwNGU0MDNkYWMHRk0g: 00:17:14.632 16:28:48 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:17:14.632 16:28:48 -- host/auth.sh@48 -- # echo ffdhe2048 00:17:14.632 
16:28:48 -- host/auth.sh@49 -- # echo DHHC-1:00:MDcwMmI4YTM0OWE3OTA4OTg2MjhkZGUwNGU0MDNkYWMHRk0g: 00:17:14.632 16:28:48 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 0 00:17:14.632 16:28:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:14.632 16:28:48 -- host/auth.sh@68 -- # digest=sha512 00:17:14.632 16:28:48 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:17:14.632 16:28:48 -- host/auth.sh@68 -- # keyid=0 00:17:14.632 16:28:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:14.632 16:28:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:14.632 16:28:48 -- common/autotest_common.sh@10 -- # set +x 00:17:14.632 16:28:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:14.632 16:28:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:14.632 16:28:48 -- nvmf/common.sh@717 -- # local ip 00:17:14.632 16:28:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:14.632 16:28:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:14.632 16:28:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:14.632 16:28:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:14.632 16:28:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:14.632 16:28:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:14.632 16:28:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:14.632 16:28:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:14.632 16:28:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:14.632 16:28:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:17:14.632 16:28:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:14.632 16:28:48 -- common/autotest_common.sh@10 -- # set +x 00:17:14.632 nvme0n1 00:17:14.632 16:28:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:14.632 16:28:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:14.632 16:28:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:14.632 16:28:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:14.632 16:28:48 -- common/autotest_common.sh@10 -- # set +x 00:17:14.632 16:28:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:14.890 16:28:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.890 16:28:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:14.890 16:28:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:14.890 16:28:48 -- common/autotest_common.sh@10 -- # set +x 00:17:14.890 16:28:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:14.890 16:28:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:14.890 16:28:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:17:14.890 16:28:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:14.890 16:28:48 -- host/auth.sh@44 -- # digest=sha512 00:17:14.890 16:28:48 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:14.890 16:28:48 -- host/auth.sh@44 -- # keyid=1 00:17:14.890 16:28:48 -- host/auth.sh@45 -- # key=DHHC-1:00:YzAxZWRkOWRmYWMxN2Q4ZjRjYTRhNjViMTYwNDk3MmZkMjY0YjUzNTVlNzNkNmRhPQxEjg==: 00:17:14.890 16:28:48 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:17:14.890 16:28:48 -- host/auth.sh@48 -- # echo ffdhe2048 00:17:14.890 16:28:48 -- host/auth.sh@49 -- # echo DHHC-1:00:YzAxZWRkOWRmYWMxN2Q4ZjRjYTRhNjViMTYwNDk3MmZkMjY0YjUzNTVlNzNkNmRhPQxEjg==: 00:17:14.890 16:28:48 
-- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 1 00:17:14.890 16:28:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:14.890 16:28:48 -- host/auth.sh@68 -- # digest=sha512 00:17:14.890 16:28:48 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:17:14.890 16:28:48 -- host/auth.sh@68 -- # keyid=1 00:17:14.890 16:28:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:14.890 16:28:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:14.890 16:28:48 -- common/autotest_common.sh@10 -- # set +x 00:17:14.890 16:28:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:14.890 16:28:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:14.890 16:28:48 -- nvmf/common.sh@717 -- # local ip 00:17:14.890 16:28:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:14.890 16:28:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:14.890 16:28:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:14.890 16:28:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:14.890 16:28:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:14.890 16:28:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:14.890 16:28:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:14.890 16:28:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:14.890 16:28:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:14.890 16:28:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:17:14.890 16:28:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:14.890 16:28:48 -- common/autotest_common.sh@10 -- # set +x 00:17:14.890 nvme0n1 00:17:14.890 16:28:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:14.890 16:28:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:14.890 16:28:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:14.890 16:28:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:14.890 16:28:48 -- common/autotest_common.sh@10 -- # set +x 00:17:14.890 16:28:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:14.890 16:28:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.890 16:28:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:14.890 16:28:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:14.890 16:28:48 -- common/autotest_common.sh@10 -- # set +x 00:17:14.890 16:28:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:14.890 16:28:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:14.890 16:28:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:17:14.890 16:28:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:14.890 16:28:48 -- host/auth.sh@44 -- # digest=sha512 00:17:14.890 16:28:48 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:14.890 16:28:48 -- host/auth.sh@44 -- # keyid=2 00:17:14.890 16:28:48 -- host/auth.sh@45 -- # key=DHHC-1:01:YzRlYmMwN2Q5ZDY4NWFkZjEwYzY3MjU0ZTRjOGZiZTjeXngU: 00:17:14.890 16:28:48 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:17:14.890 16:28:48 -- host/auth.sh@48 -- # echo ffdhe2048 00:17:14.890 16:28:48 -- host/auth.sh@49 -- # echo DHHC-1:01:YzRlYmMwN2Q5ZDY4NWFkZjEwYzY3MjU0ZTRjOGZiZTjeXngU: 00:17:14.890 16:28:48 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 2 00:17:14.890 16:28:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:14.890 16:28:48 -- 
host/auth.sh@68 -- # digest=sha512 00:17:14.890 16:28:48 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:17:14.890 16:28:48 -- host/auth.sh@68 -- # keyid=2 00:17:14.890 16:28:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:14.890 16:28:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:14.890 16:28:48 -- common/autotest_common.sh@10 -- # set +x 00:17:14.890 16:28:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:14.890 16:28:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:14.890 16:28:48 -- nvmf/common.sh@717 -- # local ip 00:17:14.890 16:28:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:14.890 16:28:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:14.890 16:28:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:14.890 16:28:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:14.890 16:28:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:14.890 16:28:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:14.890 16:28:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:14.890 16:28:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:14.890 16:28:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:14.890 16:28:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:14.890 16:28:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:14.890 16:28:48 -- common/autotest_common.sh@10 -- # set +x 00:17:15.148 nvme0n1 00:17:15.149 16:28:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:15.149 16:28:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:15.149 16:28:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:15.149 16:28:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:15.149 16:28:49 -- common/autotest_common.sh@10 -- # set +x 00:17:15.149 16:28:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:15.149 16:28:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.149 16:28:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:15.149 16:28:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:15.149 16:28:49 -- common/autotest_common.sh@10 -- # set +x 00:17:15.149 16:28:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:15.149 16:28:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:15.149 16:28:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:17:15.149 16:28:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:15.149 16:28:49 -- host/auth.sh@44 -- # digest=sha512 00:17:15.149 16:28:49 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:15.149 16:28:49 -- host/auth.sh@44 -- # keyid=3 00:17:15.149 16:28:49 -- host/auth.sh@45 -- # key=DHHC-1:02:MTZiMGMwNmYwM2NkYjNhOGEzNmI4MTc2Mjg5NTdhNjUzNWZkYTFjN2NiOTM1OWZmpX7ncA==: 00:17:15.149 16:28:49 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:17:15.149 16:28:49 -- host/auth.sh@48 -- # echo ffdhe2048 00:17:15.149 16:28:49 -- host/auth.sh@49 -- # echo DHHC-1:02:MTZiMGMwNmYwM2NkYjNhOGEzNmI4MTc2Mjg5NTdhNjUzNWZkYTFjN2NiOTM1OWZmpX7ncA==: 00:17:15.149 16:28:49 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 3 00:17:15.149 16:28:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:15.149 16:28:49 -- host/auth.sh@68 -- # digest=sha512 00:17:15.149 16:28:49 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:17:15.149 16:28:49 
-- host/auth.sh@68 -- # keyid=3 00:17:15.149 16:28:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:15.149 16:28:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:15.149 16:28:49 -- common/autotest_common.sh@10 -- # set +x 00:17:15.149 16:28:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:15.149 16:28:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:15.149 16:28:49 -- nvmf/common.sh@717 -- # local ip 00:17:15.149 16:28:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:15.149 16:28:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:15.149 16:28:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:15.149 16:28:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:15.149 16:28:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:15.149 16:28:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:15.149 16:28:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:15.149 16:28:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:15.149 16:28:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:15.149 16:28:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:17:15.149 16:28:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:15.149 16:28:49 -- common/autotest_common.sh@10 -- # set +x 00:17:15.407 nvme0n1 00:17:15.407 16:28:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:15.407 16:28:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:15.407 16:28:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:15.407 16:28:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:15.407 16:28:49 -- common/autotest_common.sh@10 -- # set +x 00:17:15.407 16:28:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:15.407 16:28:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.407 16:28:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:15.407 16:28:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:15.407 16:28:49 -- common/autotest_common.sh@10 -- # set +x 00:17:15.407 16:28:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:15.407 16:28:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:15.407 16:28:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:17:15.407 16:28:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:15.407 16:28:49 -- host/auth.sh@44 -- # digest=sha512 00:17:15.407 16:28:49 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:15.407 16:28:49 -- host/auth.sh@44 -- # keyid=4 00:17:15.407 16:28:49 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjBjOWQwYjQ2OTNmNDk0Zjg3MzRiMDQ1MWNkYzY2ODY5NDliYmJkNWM1NjQxNjBkZjQyNmY0MjdkZTQyMjFlM6Od0wk=: 00:17:15.407 16:28:49 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:17:15.407 16:28:49 -- host/auth.sh@48 -- # echo ffdhe2048 00:17:15.407 16:28:49 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjBjOWQwYjQ2OTNmNDk0Zjg3MzRiMDQ1MWNkYzY2ODY5NDliYmJkNWM1NjQxNjBkZjQyNmY0MjdkZTQyMjFlM6Od0wk=: 00:17:15.407 16:28:49 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 4 00:17:15.407 16:28:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:15.407 16:28:49 -- host/auth.sh@68 -- # digest=sha512 00:17:15.407 16:28:49 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:17:15.407 16:28:49 -- host/auth.sh@68 -- # keyid=4 00:17:15.407 16:28:49 -- host/auth.sh@69 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:15.407 16:28:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:15.407 16:28:49 -- common/autotest_common.sh@10 -- # set +x 00:17:15.407 16:28:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:15.407 16:28:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:15.407 16:28:49 -- nvmf/common.sh@717 -- # local ip 00:17:15.407 16:28:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:15.407 16:28:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:15.407 16:28:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:15.407 16:28:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:15.407 16:28:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:15.407 16:28:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:15.407 16:28:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:15.407 16:28:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:15.407 16:28:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:15.407 16:28:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:15.407 16:28:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:15.407 16:28:49 -- common/autotest_common.sh@10 -- # set +x 00:17:15.407 nvme0n1 00:17:15.407 16:28:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:15.407 16:28:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:15.407 16:28:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:15.407 16:28:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:15.407 16:28:49 -- common/autotest_common.sh@10 -- # set +x 00:17:15.407 16:28:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:15.407 16:28:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.407 16:28:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:15.407 16:28:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:15.407 16:28:49 -- common/autotest_common.sh@10 -- # set +x 00:17:15.407 16:28:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:15.408 16:28:49 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:17:15.408 16:28:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:15.408 16:28:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:17:15.408 16:28:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:15.408 16:28:49 -- host/auth.sh@44 -- # digest=sha512 00:17:15.408 16:28:49 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:15.408 16:28:49 -- host/auth.sh@44 -- # keyid=0 00:17:15.408 16:28:49 -- host/auth.sh@45 -- # key=DHHC-1:00:MDcwMmI4YTM0OWE3OTA4OTg2MjhkZGUwNGU0MDNkYWMHRk0g: 00:17:15.408 16:28:49 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:17:15.408 16:28:49 -- host/auth.sh@48 -- # echo ffdhe3072 00:17:15.408 16:28:49 -- host/auth.sh@49 -- # echo DHHC-1:00:MDcwMmI4YTM0OWE3OTA4OTg2MjhkZGUwNGU0MDNkYWMHRk0g: 00:17:15.408 16:28:49 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 0 00:17:15.408 16:28:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:15.408 16:28:49 -- host/auth.sh@68 -- # digest=sha512 00:17:15.408 16:28:49 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:17:15.408 16:28:49 -- host/auth.sh@68 -- # keyid=0 00:17:15.408 16:28:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 
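
rpc_cmd in this trace is the autotest wrapper around SPDK's scripts/rpc.py, so the connect_authenticate step for the ffdhe3072/key 0 iteration above amounts to two RPCs that could be run by hand, arguments copied verbatim from the trace:

    scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0

key0 here names a key registered earlier in the script; that registration happens before this excerpt and is not shown.
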
00:17:15.408 16:28:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:15.408 16:28:49 -- common/autotest_common.sh@10 -- # set +x 00:17:15.666 16:28:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:15.666 16:28:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:15.666 16:28:49 -- nvmf/common.sh@717 -- # local ip 00:17:15.666 16:28:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:15.666 16:28:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:15.666 16:28:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:15.666 16:28:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:15.666 16:28:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:15.666 16:28:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:15.666 16:28:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:15.666 16:28:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:15.666 16:28:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:15.666 16:28:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:17:15.666 16:28:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:15.666 16:28:49 -- common/autotest_common.sh@10 -- # set +x 00:17:15.666 nvme0n1 00:17:15.666 16:28:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:15.666 16:28:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:15.666 16:28:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:15.666 16:28:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:15.666 16:28:49 -- common/autotest_common.sh@10 -- # set +x 00:17:15.666 16:28:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:15.666 16:28:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.666 16:28:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:15.666 16:28:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:15.666 16:28:49 -- common/autotest_common.sh@10 -- # set +x 00:17:15.666 16:28:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:15.666 16:28:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:15.666 16:28:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:17:15.666 16:28:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:15.666 16:28:49 -- host/auth.sh@44 -- # digest=sha512 00:17:15.666 16:28:49 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:15.666 16:28:49 -- host/auth.sh@44 -- # keyid=1 00:17:15.666 16:28:49 -- host/auth.sh@45 -- # key=DHHC-1:00:YzAxZWRkOWRmYWMxN2Q4ZjRjYTRhNjViMTYwNDk3MmZkMjY0YjUzNTVlNzNkNmRhPQxEjg==: 00:17:15.666 16:28:49 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:17:15.666 16:28:49 -- host/auth.sh@48 -- # echo ffdhe3072 00:17:15.666 16:28:49 -- host/auth.sh@49 -- # echo DHHC-1:00:YzAxZWRkOWRmYWMxN2Q4ZjRjYTRhNjViMTYwNDk3MmZkMjY0YjUzNTVlNzNkNmRhPQxEjg==: 00:17:15.666 16:28:49 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 1 00:17:15.666 16:28:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:15.666 16:28:49 -- host/auth.sh@68 -- # digest=sha512 00:17:15.666 16:28:49 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:17:15.666 16:28:49 -- host/auth.sh@68 -- # keyid=1 00:17:15.666 16:28:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:15.666 16:28:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:15.666 16:28:49 -- 
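
The nvmf/common.sh@717-731 expansion that repeats before every attach is get_main_ns_ip picking which address variable to dereference for the active transport. A sketch reconstructed from the trace; the TEST_TRANSPORT name is an assumption, since the trace only shows its expanded value (tcp):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT ]] && return 1                    # traced as [[ -z tcp ]]
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # NVMF_INITIATOR_IP
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1                             # traced as [[ -z 10.0.0.1 ]]
        echo "${!ip}"                                           # 10.0.0.1
    }
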
common/autotest_common.sh@10 -- # set +x 00:17:15.666 16:28:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:15.666 16:28:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:15.666 16:28:49 -- nvmf/common.sh@717 -- # local ip 00:17:15.666 16:28:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:15.666 16:28:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:15.666 16:28:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:15.666 16:28:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:15.666 16:28:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:15.666 16:28:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:15.666 16:28:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:15.666 16:28:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:15.666 16:28:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:15.666 16:28:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:17:15.666 16:28:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:15.666 16:28:49 -- common/autotest_common.sh@10 -- # set +x 00:17:15.924 nvme0n1 00:17:15.924 16:28:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:15.924 16:28:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:15.924 16:28:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:15.924 16:28:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:15.924 16:28:49 -- common/autotest_common.sh@10 -- # set +x 00:17:15.924 16:28:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:15.924 16:28:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.924 16:28:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:15.924 16:28:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:15.924 16:28:49 -- common/autotest_common.sh@10 -- # set +x 00:17:15.924 16:28:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:15.924 16:28:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:15.924 16:28:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:17:15.924 16:28:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:15.924 16:28:49 -- host/auth.sh@44 -- # digest=sha512 00:17:15.924 16:28:49 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:15.924 16:28:49 -- host/auth.sh@44 -- # keyid=2 00:17:15.924 16:28:49 -- host/auth.sh@45 -- # key=DHHC-1:01:YzRlYmMwN2Q5ZDY4NWFkZjEwYzY3MjU0ZTRjOGZiZTjeXngU: 00:17:15.924 16:28:49 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:17:15.924 16:28:49 -- host/auth.sh@48 -- # echo ffdhe3072 00:17:15.924 16:28:49 -- host/auth.sh@49 -- # echo DHHC-1:01:YzRlYmMwN2Q5ZDY4NWFkZjEwYzY3MjU0ZTRjOGZiZTjeXngU: 00:17:15.924 16:28:49 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 2 00:17:15.924 16:28:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:15.924 16:28:49 -- host/auth.sh@68 -- # digest=sha512 00:17:15.924 16:28:49 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:17:15.924 16:28:49 -- host/auth.sh@68 -- # keyid=2 00:17:15.924 16:28:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:15.924 16:28:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:15.924 16:28:49 -- common/autotest_common.sh@10 -- # set +x 00:17:15.924 16:28:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:15.924 16:28:49 -- host/auth.sh@70 -- # 
get_main_ns_ip 00:17:15.924 16:28:49 -- nvmf/common.sh@717 -- # local ip 00:17:15.924 16:28:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:15.924 16:28:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:15.924 16:28:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:15.924 16:28:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:15.924 16:28:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:15.924 16:28:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:15.924 16:28:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:15.924 16:28:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:15.924 16:28:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:15.924 16:28:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:15.924 16:28:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:15.924 16:28:49 -- common/autotest_common.sh@10 -- # set +x 00:17:16.182 nvme0n1 00:17:16.182 16:28:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.182 16:28:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.182 16:28:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:16.182 16:28:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.182 16:28:49 -- common/autotest_common.sh@10 -- # set +x 00:17:16.182 16:28:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.182 16:28:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.182 16:28:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:16.182 16:28:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.182 16:28:50 -- common/autotest_common.sh@10 -- # set +x 00:17:16.182 16:28:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.182 16:28:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:16.182 16:28:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:17:16.182 16:28:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:16.182 16:28:50 -- host/auth.sh@44 -- # digest=sha512 00:17:16.182 16:28:50 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:16.182 16:28:50 -- host/auth.sh@44 -- # keyid=3 00:17:16.182 16:28:50 -- host/auth.sh@45 -- # key=DHHC-1:02:MTZiMGMwNmYwM2NkYjNhOGEzNmI4MTc2Mjg5NTdhNjUzNWZkYTFjN2NiOTM1OWZmpX7ncA==: 00:17:16.182 16:28:50 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:17:16.182 16:28:50 -- host/auth.sh@48 -- # echo ffdhe3072 00:17:16.182 16:28:50 -- host/auth.sh@49 -- # echo DHHC-1:02:MTZiMGMwNmYwM2NkYjNhOGEzNmI4MTc2Mjg5NTdhNjUzNWZkYTFjN2NiOTM1OWZmpX7ncA==: 00:17:16.182 16:28:50 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 3 00:17:16.182 16:28:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:16.182 16:28:50 -- host/auth.sh@68 -- # digest=sha512 00:17:16.182 16:28:50 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:17:16.182 16:28:50 -- host/auth.sh@68 -- # keyid=3 00:17:16.182 16:28:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:16.182 16:28:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.182 16:28:50 -- common/autotest_common.sh@10 -- # set +x 00:17:16.182 16:28:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.182 16:28:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:16.182 16:28:50 -- nvmf/common.sh@717 -- # local ip 00:17:16.182 16:28:50 -- nvmf/common.sh@718 -- 
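
Every rpc_cmd above is bracketed by the same three trace lines: common/autotest_common.sh@549 xtrace_disable, @10 set +x, and an @577 [[ 0 == 0 ]] on the way back out. A rough sketch of that save/restore pattern; the state variable and the restore helper are assumptions, since only the disable side is visible in this excerpt:

    xtrace_disable() {
        # Remember whether -x was active, then silence tracing; the set +x
        # at autotest_common.sh@10 is the last statement that gets traced.
        [[ $- == *x* ]] && XTRACE_STATE=0 || XTRACE_STATE=1
        set +x
    }
    xtrace_restore() {
        # Plausibly the source of the [[ 0 == 0 ]] entries at @577 once
        # tracing is back on.
        [[ $XTRACE_STATE == 0 ]] && set -x
    }
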
# ip_candidates=() 00:17:16.182 16:28:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:16.182 16:28:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:16.182 16:28:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:16.182 16:28:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:16.182 16:28:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:16.182 16:28:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:16.182 16:28:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:16.182 16:28:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:16.182 16:28:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:17:16.182 16:28:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.182 16:28:50 -- common/autotest_common.sh@10 -- # set +x 00:17:16.182 nvme0n1 00:17:16.182 16:28:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.182 16:28:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.182 16:28:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.182 16:28:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:16.182 16:28:50 -- common/autotest_common.sh@10 -- # set +x 00:17:16.182 16:28:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.441 16:28:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.441 16:28:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:16.441 16:28:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.441 16:28:50 -- common/autotest_common.sh@10 -- # set +x 00:17:16.441 16:28:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.441 16:28:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:16.441 16:28:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:17:16.441 16:28:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:16.441 16:28:50 -- host/auth.sh@44 -- # digest=sha512 00:17:16.441 16:28:50 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:16.441 16:28:50 -- host/auth.sh@44 -- # keyid=4 00:17:16.441 16:28:50 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjBjOWQwYjQ2OTNmNDk0Zjg3MzRiMDQ1MWNkYzY2ODY5NDliYmJkNWM1NjQxNjBkZjQyNmY0MjdkZTQyMjFlM6Od0wk=: 00:17:16.441 16:28:50 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:17:16.441 16:28:50 -- host/auth.sh@48 -- # echo ffdhe3072 00:17:16.441 16:28:50 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjBjOWQwYjQ2OTNmNDk0Zjg3MzRiMDQ1MWNkYzY2ODY5NDliYmJkNWM1NjQxNjBkZjQyNmY0MjdkZTQyMjFlM6Od0wk=: 00:17:16.441 16:28:50 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 4 00:17:16.441 16:28:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:16.441 16:28:50 -- host/auth.sh@68 -- # digest=sha512 00:17:16.441 16:28:50 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:17:16.441 16:28:50 -- host/auth.sh@68 -- # keyid=4 00:17:16.441 16:28:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:16.441 16:28:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.441 16:28:50 -- common/autotest_common.sh@10 -- # set +x 00:17:16.441 16:28:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.441 16:28:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:16.441 16:28:50 -- nvmf/common.sh@717 -- # local ip 00:17:16.441 16:28:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:16.441 16:28:50 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:17:16.441 16:28:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:16.441 16:28:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:16.441 16:28:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:16.441 16:28:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:16.441 16:28:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:16.441 16:28:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:16.441 16:28:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:16.441 16:28:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:16.441 16:28:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.441 16:28:50 -- common/autotest_common.sh@10 -- # set +x 00:17:16.441 nvme0n1 00:17:16.441 16:28:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.441 16:28:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.441 16:28:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.441 16:28:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:16.441 16:28:50 -- common/autotest_common.sh@10 -- # set +x 00:17:16.441 16:28:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.441 16:28:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.441 16:28:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:16.441 16:28:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.441 16:28:50 -- common/autotest_common.sh@10 -- # set +x 00:17:16.441 16:28:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.441 16:28:50 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:17:16.441 16:28:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:16.441 16:28:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:17:16.441 16:28:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:16.441 16:28:50 -- host/auth.sh@44 -- # digest=sha512 00:17:16.441 16:28:50 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:16.441 16:28:50 -- host/auth.sh@44 -- # keyid=0 00:17:16.441 16:28:50 -- host/auth.sh@45 -- # key=DHHC-1:00:MDcwMmI4YTM0OWE3OTA4OTg2MjhkZGUwNGU0MDNkYWMHRk0g: 00:17:16.441 16:28:50 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:17:16.441 16:28:50 -- host/auth.sh@48 -- # echo ffdhe4096 00:17:16.441 16:28:50 -- host/auth.sh@49 -- # echo DHHC-1:00:MDcwMmI4YTM0OWE3OTA4OTg2MjhkZGUwNGU0MDNkYWMHRk0g: 00:17:16.441 16:28:50 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 0 00:17:16.441 16:28:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:16.441 16:28:50 -- host/auth.sh@68 -- # digest=sha512 00:17:16.441 16:28:50 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:17:16.441 16:28:50 -- host/auth.sh@68 -- # keyid=0 00:17:16.441 16:28:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:16.441 16:28:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.441 16:28:50 -- common/autotest_common.sh@10 -- # set +x 00:17:16.699 16:28:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.699 16:28:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:16.699 16:28:50 -- nvmf/common.sh@717 -- # local ip 00:17:16.699 16:28:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:16.699 16:28:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:16.699 16:28:50 -- nvmf/common.sh@720 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:16.699 16:28:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:16.699 16:28:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:16.699 16:28:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:16.699 16:28:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:16.699 16:28:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:16.699 16:28:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:16.699 16:28:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:17:16.699 16:28:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.699 16:28:50 -- common/autotest_common.sh@10 -- # set +x 00:17:16.699 nvme0n1 00:17:16.699 16:28:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.699 16:28:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.699 16:28:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:16.699 16:28:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.699 16:28:50 -- common/autotest_common.sh@10 -- # set +x 00:17:16.699 16:28:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.957 16:28:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.957 16:28:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:16.957 16:28:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.957 16:28:50 -- common/autotest_common.sh@10 -- # set +x 00:17:16.957 16:28:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.957 16:28:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:16.957 16:28:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:17:16.957 16:28:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:16.957 16:28:50 -- host/auth.sh@44 -- # digest=sha512 00:17:16.957 16:28:50 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:16.957 16:28:50 -- host/auth.sh@44 -- # keyid=1 00:17:16.957 16:28:50 -- host/auth.sh@45 -- # key=DHHC-1:00:YzAxZWRkOWRmYWMxN2Q4ZjRjYTRhNjViMTYwNDk3MmZkMjY0YjUzNTVlNzNkNmRhPQxEjg==: 00:17:16.957 16:28:50 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:17:16.957 16:28:50 -- host/auth.sh@48 -- # echo ffdhe4096 00:17:16.957 16:28:50 -- host/auth.sh@49 -- # echo DHHC-1:00:YzAxZWRkOWRmYWMxN2Q4ZjRjYTRhNjViMTYwNDk3MmZkMjY0YjUzNTVlNzNkNmRhPQxEjg==: 00:17:16.957 16:28:50 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 1 00:17:16.957 16:28:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:16.957 16:28:50 -- host/auth.sh@68 -- # digest=sha512 00:17:16.957 16:28:50 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:17:16.957 16:28:50 -- host/auth.sh@68 -- # keyid=1 00:17:16.957 16:28:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:16.957 16:28:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.957 16:28:50 -- common/autotest_common.sh@10 -- # set +x 00:17:16.957 16:28:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.957 16:28:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:16.957 16:28:50 -- nvmf/common.sh@717 -- # local ip 00:17:16.957 16:28:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:16.957 16:28:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:16.957 16:28:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:16.957 16:28:50 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:16.957 16:28:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:16.957 16:28:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:16.957 16:28:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:16.957 16:28:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:16.957 16:28:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:16.957 16:28:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:17:16.957 16:28:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.957 16:28:50 -- common/autotest_common.sh@10 -- # set +x 00:17:16.957 nvme0n1 00:17:16.957 16:28:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.957 16:28:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.957 16:28:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:16.957 16:28:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.957 16:28:50 -- common/autotest_common.sh@10 -- # set +x 00:17:16.957 16:28:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:17.215 16:28:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.215 16:28:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:17.215 16:28:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:17.215 16:28:51 -- common/autotest_common.sh@10 -- # set +x 00:17:17.215 16:28:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:17.215 16:28:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:17.215 16:28:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:17:17.215 16:28:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:17.215 16:28:51 -- host/auth.sh@44 -- # digest=sha512 00:17:17.215 16:28:51 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:17.215 16:28:51 -- host/auth.sh@44 -- # keyid=2 00:17:17.215 16:28:51 -- host/auth.sh@45 -- # key=DHHC-1:01:YzRlYmMwN2Q5ZDY4NWFkZjEwYzY3MjU0ZTRjOGZiZTjeXngU: 00:17:17.215 16:28:51 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:17:17.215 16:28:51 -- host/auth.sh@48 -- # echo ffdhe4096 00:17:17.215 16:28:51 -- host/auth.sh@49 -- # echo DHHC-1:01:YzRlYmMwN2Q5ZDY4NWFkZjEwYzY3MjU0ZTRjOGZiZTjeXngU: 00:17:17.215 16:28:51 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 2 00:17:17.215 16:28:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:17.215 16:28:51 -- host/auth.sh@68 -- # digest=sha512 00:17:17.215 16:28:51 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:17:17.215 16:28:51 -- host/auth.sh@68 -- # keyid=2 00:17:17.215 16:28:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:17.215 16:28:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:17.215 16:28:51 -- common/autotest_common.sh@10 -- # set +x 00:17:17.215 16:28:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:17.215 16:28:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:17.215 16:28:51 -- nvmf/common.sh@717 -- # local ip 00:17:17.215 16:28:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:17.215 16:28:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:17.215 16:28:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.215 16:28:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.215 16:28:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:17.215 16:28:51 -- nvmf/common.sh@723 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:17:17.215 16:28:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:17.215 16:28:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:17.215 16:28:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:17.215 16:28:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:17.215 16:28:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:17.215 16:28:51 -- common/autotest_common.sh@10 -- # set +x 00:17:17.474 nvme0n1 00:17:17.474 16:28:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:17.474 16:28:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:17.474 16:28:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:17.474 16:28:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:17.474 16:28:51 -- common/autotest_common.sh@10 -- # set +x 00:17:17.474 16:28:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:17.474 16:28:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.474 16:28:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:17.474 16:28:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:17.474 16:28:51 -- common/autotest_common.sh@10 -- # set +x 00:17:17.474 16:28:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:17.474 16:28:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:17.474 16:28:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:17:17.474 16:28:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:17.474 16:28:51 -- host/auth.sh@44 -- # digest=sha512 00:17:17.474 16:28:51 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:17.474 16:28:51 -- host/auth.sh@44 -- # keyid=3 00:17:17.474 16:28:51 -- host/auth.sh@45 -- # key=DHHC-1:02:MTZiMGMwNmYwM2NkYjNhOGEzNmI4MTc2Mjg5NTdhNjUzNWZkYTFjN2NiOTM1OWZmpX7ncA==: 00:17:17.474 16:28:51 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:17:17.474 16:28:51 -- host/auth.sh@48 -- # echo ffdhe4096 00:17:17.474 16:28:51 -- host/auth.sh@49 -- # echo DHHC-1:02:MTZiMGMwNmYwM2NkYjNhOGEzNmI4MTc2Mjg5NTdhNjUzNWZkYTFjN2NiOTM1OWZmpX7ncA==: 00:17:17.474 16:28:51 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 3 00:17:17.474 16:28:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:17.474 16:28:51 -- host/auth.sh@68 -- # digest=sha512 00:17:17.474 16:28:51 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:17:17.474 16:28:51 -- host/auth.sh@68 -- # keyid=3 00:17:17.474 16:28:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:17.474 16:28:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:17.474 16:28:51 -- common/autotest_common.sh@10 -- # set +x 00:17:17.474 16:28:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:17.474 16:28:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:17.474 16:28:51 -- nvmf/common.sh@717 -- # local ip 00:17:17.474 16:28:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:17.474 16:28:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:17.474 16:28:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.474 16:28:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.474 16:28:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:17.474 16:28:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.474 16:28:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:17.474 16:28:51 -- 
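
Each key iteration above ends with the same verification, reconstructed here from the host/auth.sh@73-74 entries. The \n\v\m\e\0 pattern in the trace is simply nvme0 with every character escaped by bash's xtrace printer, and the bare nvme0n1 tokens are the bdev name that bdev_nvme_attach_controller prints on a successful, authenticated connect:

    # Reconstructed verification step (host/auth.sh@73-74).
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]                       # authentication produced a controller
    rpc_cmd bdev_nvme_detach_controller nvme0    # clean up before the next key
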
nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:17.474 16:28:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:17.474 16:28:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:17:17.474 16:28:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:17.474 16:28:51 -- common/autotest_common.sh@10 -- # set +x 00:17:17.732 nvme0n1 00:17:17.732 16:28:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:17.732 16:28:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:17.732 16:28:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:17.732 16:28:51 -- common/autotest_common.sh@10 -- # set +x 00:17:17.732 16:28:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:17.732 16:28:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:17.732 16:28:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.732 16:28:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:17.732 16:28:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:17.732 16:28:51 -- common/autotest_common.sh@10 -- # set +x 00:17:17.732 16:28:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:17.732 16:28:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:17.732 16:28:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:17:17.732 16:28:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:17.732 16:28:51 -- host/auth.sh@44 -- # digest=sha512 00:17:17.732 16:28:51 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:17.732 16:28:51 -- host/auth.sh@44 -- # keyid=4 00:17:17.732 16:28:51 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjBjOWQwYjQ2OTNmNDk0Zjg3MzRiMDQ1MWNkYzY2ODY5NDliYmJkNWM1NjQxNjBkZjQyNmY0MjdkZTQyMjFlM6Od0wk=: 00:17:17.732 16:28:51 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:17:17.732 16:28:51 -- host/auth.sh@48 -- # echo ffdhe4096 00:17:17.732 16:28:51 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjBjOWQwYjQ2OTNmNDk0Zjg3MzRiMDQ1MWNkYzY2ODY5NDliYmJkNWM1NjQxNjBkZjQyNmY0MjdkZTQyMjFlM6Od0wk=: 00:17:17.732 16:28:51 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 4 00:17:17.732 16:28:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:17.732 16:28:51 -- host/auth.sh@68 -- # digest=sha512 00:17:17.732 16:28:51 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:17:17.732 16:28:51 -- host/auth.sh@68 -- # keyid=4 00:17:17.732 16:28:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:17.732 16:28:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:17.732 16:28:51 -- common/autotest_common.sh@10 -- # set +x 00:17:17.732 16:28:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:17.732 16:28:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:17.732 16:28:51 -- nvmf/common.sh@717 -- # local ip 00:17:17.732 16:28:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:17.732 16:28:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:17.732 16:28:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.732 16:28:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.732 16:28:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:17.732 16:28:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.732 16:28:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:17.732 16:28:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:17.732 16:28:51 -- 
nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:17.732 16:28:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:17.732 16:28:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:17.732 16:28:51 -- common/autotest_common.sh@10 -- # set +x 00:17:17.990 nvme0n1 00:17:17.990 16:28:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:17.990 16:28:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:17.990 16:28:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:17.990 16:28:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:17.990 16:28:51 -- common/autotest_common.sh@10 -- # set +x 00:17:17.990 16:28:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:17.990 16:28:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.990 16:28:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:17.990 16:28:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:17.990 16:28:51 -- common/autotest_common.sh@10 -- # set +x 00:17:17.990 16:28:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:17.990 16:28:51 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:17:17.990 16:28:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:17.990 16:28:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:17:17.990 16:28:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:17.990 16:28:51 -- host/auth.sh@44 -- # digest=sha512 00:17:17.990 16:28:51 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:17.990 16:28:51 -- host/auth.sh@44 -- # keyid=0 00:17:17.990 16:28:51 -- host/auth.sh@45 -- # key=DHHC-1:00:MDcwMmI4YTM0OWE3OTA4OTg2MjhkZGUwNGU0MDNkYWMHRk0g: 00:17:17.990 16:28:51 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:17:17.990 16:28:51 -- host/auth.sh@48 -- # echo ffdhe6144 00:17:17.990 16:28:51 -- host/auth.sh@49 -- # echo DHHC-1:00:MDcwMmI4YTM0OWE3OTA4OTg2MjhkZGUwNGU0MDNkYWMHRk0g: 00:17:17.990 16:28:51 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 0 00:17:17.990 16:28:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:17.990 16:28:51 -- host/auth.sh@68 -- # digest=sha512 00:17:17.990 16:28:51 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:17:17.990 16:28:51 -- host/auth.sh@68 -- # keyid=0 00:17:17.990 16:28:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:17.990 16:28:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:17.990 16:28:51 -- common/autotest_common.sh@10 -- # set +x 00:17:17.990 16:28:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:17.990 16:28:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:17.990 16:28:51 -- nvmf/common.sh@717 -- # local ip 00:17:17.990 16:28:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:17.990 16:28:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:17.990 16:28:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.990 16:28:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.990 16:28:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:17.990 16:28:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.990 16:28:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:17.990 16:28:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:17.990 16:28:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:17.990 16:28:51 -- host/auth.sh@70 -- # 
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:17:17.990 16:28:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:17.990 16:28:51 -- common/autotest_common.sh@10 -- # set +x 00:17:18.248 nvme0n1 00:17:18.248 16:28:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:18.248 16:28:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:18.248 16:28:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:18.248 16:28:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:18.248 16:28:52 -- common/autotest_common.sh@10 -- # set +x 00:17:18.248 16:28:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:18.506 16:28:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.506 16:28:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:18.506 16:28:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:18.506 16:28:52 -- common/autotest_common.sh@10 -- # set +x 00:17:18.506 16:28:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:18.506 16:28:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:18.506 16:28:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:17:18.506 16:28:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:18.506 16:28:52 -- host/auth.sh@44 -- # digest=sha512 00:17:18.506 16:28:52 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:18.506 16:28:52 -- host/auth.sh@44 -- # keyid=1 00:17:18.506 16:28:52 -- host/auth.sh@45 -- # key=DHHC-1:00:YzAxZWRkOWRmYWMxN2Q4ZjRjYTRhNjViMTYwNDk3MmZkMjY0YjUzNTVlNzNkNmRhPQxEjg==: 00:17:18.506 16:28:52 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:17:18.506 16:28:52 -- host/auth.sh@48 -- # echo ffdhe6144 00:17:18.506 16:28:52 -- host/auth.sh@49 -- # echo DHHC-1:00:YzAxZWRkOWRmYWMxN2Q4ZjRjYTRhNjViMTYwNDk3MmZkMjY0YjUzNTVlNzNkNmRhPQxEjg==: 00:17:18.506 16:28:52 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 1 00:17:18.506 16:28:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:18.506 16:28:52 -- host/auth.sh@68 -- # digest=sha512 00:17:18.506 16:28:52 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:17:18.506 16:28:52 -- host/auth.sh@68 -- # keyid=1 00:17:18.506 16:28:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:18.506 16:28:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:18.506 16:28:52 -- common/autotest_common.sh@10 -- # set +x 00:17:18.506 16:28:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:18.506 16:28:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:18.506 16:28:52 -- nvmf/common.sh@717 -- # local ip 00:17:18.506 16:28:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:18.506 16:28:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:18.506 16:28:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:18.506 16:28:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:18.506 16:28:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:18.506 16:28:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:18.506 16:28:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:18.506 16:28:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:18.506 16:28:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:18.506 16:28:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:17:18.506 16:28:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:18.506 16:28:52 -- common/autotest_common.sh@10 -- # set +x 00:17:18.764 nvme0n1 00:17:18.764 16:28:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:18.764 16:28:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:18.764 16:28:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:18.764 16:28:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:18.764 16:28:52 -- common/autotest_common.sh@10 -- # set +x 00:17:18.764 16:28:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:18.764 16:28:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.764 16:28:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:18.764 16:28:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:18.764 16:28:52 -- common/autotest_common.sh@10 -- # set +x 00:17:18.764 16:28:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:18.764 16:28:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:18.764 16:28:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:17:18.764 16:28:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:18.764 16:28:52 -- host/auth.sh@44 -- # digest=sha512 00:17:18.764 16:28:52 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:18.764 16:28:52 -- host/auth.sh@44 -- # keyid=2 00:17:18.764 16:28:52 -- host/auth.sh@45 -- # key=DHHC-1:01:YzRlYmMwN2Q5ZDY4NWFkZjEwYzY3MjU0ZTRjOGZiZTjeXngU: 00:17:18.764 16:28:52 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:17:18.764 16:28:52 -- host/auth.sh@48 -- # echo ffdhe6144 00:17:18.764 16:28:52 -- host/auth.sh@49 -- # echo DHHC-1:01:YzRlYmMwN2Q5ZDY4NWFkZjEwYzY3MjU0ZTRjOGZiZTjeXngU: 00:17:18.764 16:28:52 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 2 00:17:18.764 16:28:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:18.764 16:28:52 -- host/auth.sh@68 -- # digest=sha512 00:17:18.764 16:28:52 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:17:18.764 16:28:52 -- host/auth.sh@68 -- # keyid=2 00:17:18.764 16:28:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:18.764 16:28:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:18.764 16:28:52 -- common/autotest_common.sh@10 -- # set +x 00:17:18.764 16:28:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:18.764 16:28:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:18.764 16:28:52 -- nvmf/common.sh@717 -- # local ip 00:17:18.764 16:28:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:18.764 16:28:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:18.764 16:28:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:18.764 16:28:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:18.764 16:28:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:18.764 16:28:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:18.764 16:28:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:18.764 16:28:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:18.764 16:28:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:18.764 16:28:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:18.764 16:28:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:18.764 16:28:52 -- 
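
The secrets cycled through this section are NVMe DH-HMAC-CHAP keys in their standard text form, DHHC-1:<t>:<base64>:, where <t> selects the retained-key transform (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload is the raw secret followed by a 4-byte CRC-32. A quick sanity check against one of the :00: keys from this run:

    key='DHHC-1:00:MDcwMmI4YTM0OWE3OTA4OTg2MjhkZGUwNGU0MDNkYWMHRk0g:'
    cut -d: -f3 <<< "$key" | base64 -d | wc -c   # 36 = 32-byte secret + 4-byte CRC-32
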
common/autotest_common.sh@10 -- # set +x 00:17:19.330 nvme0n1 00:17:19.330 16:28:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.330 16:28:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:19.330 16:28:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:19.330 16:28:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:19.330 16:28:53 -- common/autotest_common.sh@10 -- # set +x 00:17:19.330 16:28:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.330 16:28:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.330 16:28:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:19.330 16:28:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:19.330 16:28:53 -- common/autotest_common.sh@10 -- # set +x 00:17:19.330 16:28:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.330 16:28:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:19.330 16:28:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:17:19.330 16:28:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:19.330 16:28:53 -- host/auth.sh@44 -- # digest=sha512 00:17:19.330 16:28:53 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:19.330 16:28:53 -- host/auth.sh@44 -- # keyid=3 00:17:19.330 16:28:53 -- host/auth.sh@45 -- # key=DHHC-1:02:MTZiMGMwNmYwM2NkYjNhOGEzNmI4MTc2Mjg5NTdhNjUzNWZkYTFjN2NiOTM1OWZmpX7ncA==: 00:17:19.330 16:28:53 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:17:19.330 16:28:53 -- host/auth.sh@48 -- # echo ffdhe6144 00:17:19.330 16:28:53 -- host/auth.sh@49 -- # echo DHHC-1:02:MTZiMGMwNmYwM2NkYjNhOGEzNmI4MTc2Mjg5NTdhNjUzNWZkYTFjN2NiOTM1OWZmpX7ncA==: 00:17:19.330 16:28:53 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 3 00:17:19.330 16:28:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:19.330 16:28:53 -- host/auth.sh@68 -- # digest=sha512 00:17:19.330 16:28:53 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:17:19.330 16:28:53 -- host/auth.sh@68 -- # keyid=3 00:17:19.330 16:28:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:19.330 16:28:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:19.330 16:28:53 -- common/autotest_common.sh@10 -- # set +x 00:17:19.330 16:28:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.330 16:28:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:19.330 16:28:53 -- nvmf/common.sh@717 -- # local ip 00:17:19.330 16:28:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:19.330 16:28:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:19.330 16:28:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:19.330 16:28:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:19.330 16:28:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:19.330 16:28:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:19.330 16:28:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:19.330 16:28:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:19.330 16:28:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:19.330 16:28:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:17:19.330 16:28:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:19.330 16:28:53 -- common/autotest_common.sh@10 -- # set +x 00:17:19.588 nvme0n1 00:17:19.588 16:28:53 -- common/autotest_common.sh@577 -- 
# [[ 0 == 0 ]] 00:17:19.588 16:28:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:19.588 16:28:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:19.588 16:28:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:19.588 16:28:53 -- common/autotest_common.sh@10 -- # set +x 00:17:19.588 16:28:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.845 16:28:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.845 16:28:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:19.846 16:28:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:19.846 16:28:53 -- common/autotest_common.sh@10 -- # set +x 00:17:19.846 16:28:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.846 16:28:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:19.846 16:28:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:17:19.846 16:28:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:19.846 16:28:53 -- host/auth.sh@44 -- # digest=sha512 00:17:19.846 16:28:53 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:19.846 16:28:53 -- host/auth.sh@44 -- # keyid=4 00:17:19.846 16:28:53 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjBjOWQwYjQ2OTNmNDk0Zjg3MzRiMDQ1MWNkYzY2ODY5NDliYmJkNWM1NjQxNjBkZjQyNmY0MjdkZTQyMjFlM6Od0wk=: 00:17:19.846 16:28:53 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:17:19.846 16:28:53 -- host/auth.sh@48 -- # echo ffdhe6144 00:17:19.846 16:28:53 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjBjOWQwYjQ2OTNmNDk0Zjg3MzRiMDQ1MWNkYzY2ODY5NDliYmJkNWM1NjQxNjBkZjQyNmY0MjdkZTQyMjFlM6Od0wk=: 00:17:19.846 16:28:53 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 4 00:17:19.846 16:28:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:19.846 16:28:53 -- host/auth.sh@68 -- # digest=sha512 00:17:19.846 16:28:53 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:17:19.846 16:28:53 -- host/auth.sh@68 -- # keyid=4 00:17:19.846 16:28:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:19.846 16:28:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:19.846 16:28:53 -- common/autotest_common.sh@10 -- # set +x 00:17:19.846 16:28:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.846 16:28:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:19.846 16:28:53 -- nvmf/common.sh@717 -- # local ip 00:17:19.846 16:28:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:19.846 16:28:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:19.846 16:28:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:19.846 16:28:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:19.846 16:28:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:19.846 16:28:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:19.846 16:28:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:19.846 16:28:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:19.846 16:28:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:19.846 16:28:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:19.846 16:28:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:19.846 16:28:53 -- common/autotest_common.sh@10 -- # set +x 00:17:20.103 nvme0n1 00:17:20.103 16:28:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:20.103 16:28:54 -- host/auth.sh@73 -- # rpc_cmd 
bdev_nvme_get_controllers 00:17:20.103 16:28:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:20.103 16:28:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:20.103 16:28:54 -- common/autotest_common.sh@10 -- # set +x 00:17:20.103 16:28:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:20.103 16:28:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.103 16:28:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:20.103 16:28:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:20.103 16:28:54 -- common/autotest_common.sh@10 -- # set +x 00:17:20.103 16:28:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:20.103 16:28:54 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:17:20.103 16:28:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:20.103 16:28:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:17:20.103 16:28:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:20.103 16:28:54 -- host/auth.sh@44 -- # digest=sha512 00:17:20.103 16:28:54 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:20.103 16:28:54 -- host/auth.sh@44 -- # keyid=0 00:17:20.103 16:28:54 -- host/auth.sh@45 -- # key=DHHC-1:00:MDcwMmI4YTM0OWE3OTA4OTg2MjhkZGUwNGU0MDNkYWMHRk0g: 00:17:20.103 16:28:54 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:17:20.103 16:28:54 -- host/auth.sh@48 -- # echo ffdhe8192 00:17:20.103 16:28:54 -- host/auth.sh@49 -- # echo DHHC-1:00:MDcwMmI4YTM0OWE3OTA4OTg2MjhkZGUwNGU0MDNkYWMHRk0g: 00:17:20.103 16:28:54 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 0 00:17:20.103 16:28:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:20.103 16:28:54 -- host/auth.sh@68 -- # digest=sha512 00:17:20.103 16:28:54 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:17:20.103 16:28:54 -- host/auth.sh@68 -- # keyid=0 00:17:20.103 16:28:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:20.103 16:28:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:20.103 16:28:54 -- common/autotest_common.sh@10 -- # set +x 00:17:20.103 16:28:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:20.103 16:28:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:20.103 16:28:54 -- nvmf/common.sh@717 -- # local ip 00:17:20.103 16:28:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:20.103 16:28:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:20.103 16:28:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:20.103 16:28:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:20.103 16:28:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:20.103 16:28:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:20.103 16:28:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:20.103 16:28:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:20.103 16:28:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:20.103 16:28:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:17:20.103 16:28:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:20.103 16:28:54 -- common/autotest_common.sh@10 -- # set +x 00:17:20.672 nvme0n1 00:17:20.672 16:28:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:20.672 16:28:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:20.672 16:28:54 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:17:20.672 16:28:54 -- common/autotest_common.sh@10 -- # set +x 00:17:20.672 16:28:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:20.672 16:28:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:20.963 16:28:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.963 16:28:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:20.963 16:28:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:20.963 16:28:54 -- common/autotest_common.sh@10 -- # set +x 00:17:20.963 16:28:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:20.963 16:28:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:20.963 16:28:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:17:20.963 16:28:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:20.963 16:28:54 -- host/auth.sh@44 -- # digest=sha512 00:17:20.963 16:28:54 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:20.963 16:28:54 -- host/auth.sh@44 -- # keyid=1 00:17:20.963 16:28:54 -- host/auth.sh@45 -- # key=DHHC-1:00:YzAxZWRkOWRmYWMxN2Q4ZjRjYTRhNjViMTYwNDk3MmZkMjY0YjUzNTVlNzNkNmRhPQxEjg==: 00:17:20.963 16:28:54 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:17:20.963 16:28:54 -- host/auth.sh@48 -- # echo ffdhe8192 00:17:20.963 16:28:54 -- host/auth.sh@49 -- # echo DHHC-1:00:YzAxZWRkOWRmYWMxN2Q4ZjRjYTRhNjViMTYwNDk3MmZkMjY0YjUzNTVlNzNkNmRhPQxEjg==: 00:17:20.963 16:28:54 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 1 00:17:20.963 16:28:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:20.963 16:28:54 -- host/auth.sh@68 -- # digest=sha512 00:17:20.963 16:28:54 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:17:20.963 16:28:54 -- host/auth.sh@68 -- # keyid=1 00:17:20.963 16:28:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:20.963 16:28:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:20.963 16:28:54 -- common/autotest_common.sh@10 -- # set +x 00:17:20.963 16:28:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:20.963 16:28:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:20.963 16:28:54 -- nvmf/common.sh@717 -- # local ip 00:17:20.963 16:28:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:20.963 16:28:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:20.963 16:28:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:20.963 16:28:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:20.963 16:28:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:20.963 16:28:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:20.963 16:28:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:20.963 16:28:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:20.963 16:28:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:20.963 16:28:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:17:20.963 16:28:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:20.963 16:28:54 -- common/autotest_common.sh@10 -- # set +x 00:17:21.529 nvme0n1 00:17:21.529 16:28:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:21.529 16:28:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:21.529 16:28:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:21.529 16:28:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:21.529 16:28:55 -- 
common/autotest_common.sh@10 -- # set +x 00:17:21.529 16:28:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:21.529 16:28:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.529 16:28:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:21.529 16:28:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:21.529 16:28:55 -- common/autotest_common.sh@10 -- # set +x 00:17:21.529 16:28:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:21.529 16:28:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:21.529 16:28:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:17:21.529 16:28:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:21.529 16:28:55 -- host/auth.sh@44 -- # digest=sha512 00:17:21.529 16:28:55 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:21.529 16:28:55 -- host/auth.sh@44 -- # keyid=2 00:17:21.529 16:28:55 -- host/auth.sh@45 -- # key=DHHC-1:01:YzRlYmMwN2Q5ZDY4NWFkZjEwYzY3MjU0ZTRjOGZiZTjeXngU: 00:17:21.529 16:28:55 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:17:21.529 16:28:55 -- host/auth.sh@48 -- # echo ffdhe8192 00:17:21.529 16:28:55 -- host/auth.sh@49 -- # echo DHHC-1:01:YzRlYmMwN2Q5ZDY4NWFkZjEwYzY3MjU0ZTRjOGZiZTjeXngU: 00:17:21.529 16:28:55 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 2 00:17:21.529 16:28:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:21.529 16:28:55 -- host/auth.sh@68 -- # digest=sha512 00:17:21.529 16:28:55 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:17:21.529 16:28:55 -- host/auth.sh@68 -- # keyid=2 00:17:21.529 16:28:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:21.529 16:28:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:21.529 16:28:55 -- common/autotest_common.sh@10 -- # set +x 00:17:21.529 16:28:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:21.529 16:28:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:21.529 16:28:55 -- nvmf/common.sh@717 -- # local ip 00:17:21.529 16:28:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:21.529 16:28:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:21.529 16:28:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:21.529 16:28:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:21.529 16:28:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:21.529 16:28:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:21.529 16:28:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:21.529 16:28:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:21.529 16:28:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:21.529 16:28:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:21.529 16:28:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:21.529 16:28:55 -- common/autotest_common.sh@10 -- # set +x 00:17:22.093 nvme0n1 00:17:22.093 16:28:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:22.093 16:28:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:22.093 16:28:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:22.094 16:28:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:22.094 16:28:56 -- common/autotest_common.sh@10 -- # set +x 00:17:22.094 16:28:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:22.351 16:28:56 -- host/auth.sh@73 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:17:22.351 16:28:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:22.351 16:28:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:22.351 16:28:56 -- common/autotest_common.sh@10 -- # set +x 00:17:22.351 16:28:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:22.351 16:28:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:22.351 16:28:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:17:22.351 16:28:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:22.351 16:28:56 -- host/auth.sh@44 -- # digest=sha512 00:17:22.351 16:28:56 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:22.351 16:28:56 -- host/auth.sh@44 -- # keyid=3 00:17:22.351 16:28:56 -- host/auth.sh@45 -- # key=DHHC-1:02:MTZiMGMwNmYwM2NkYjNhOGEzNmI4MTc2Mjg5NTdhNjUzNWZkYTFjN2NiOTM1OWZmpX7ncA==: 00:17:22.351 16:28:56 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:17:22.351 16:28:56 -- host/auth.sh@48 -- # echo ffdhe8192 00:17:22.351 16:28:56 -- host/auth.sh@49 -- # echo DHHC-1:02:MTZiMGMwNmYwM2NkYjNhOGEzNmI4MTc2Mjg5NTdhNjUzNWZkYTFjN2NiOTM1OWZmpX7ncA==: 00:17:22.351 16:28:56 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 3 00:17:22.351 16:28:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:22.351 16:28:56 -- host/auth.sh@68 -- # digest=sha512 00:17:22.351 16:28:56 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:17:22.351 16:28:56 -- host/auth.sh@68 -- # keyid=3 00:17:22.351 16:28:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:22.351 16:28:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:22.351 16:28:56 -- common/autotest_common.sh@10 -- # set +x 00:17:22.351 16:28:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:22.351 16:28:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:22.351 16:28:56 -- nvmf/common.sh@717 -- # local ip 00:17:22.351 16:28:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:22.351 16:28:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:22.351 16:28:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:22.351 16:28:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:22.351 16:28:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:22.351 16:28:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:22.351 16:28:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:22.351 16:28:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:22.351 16:28:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:22.351 16:28:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:17:22.351 16:28:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:22.351 16:28:56 -- common/autotest_common.sh@10 -- # set +x 00:17:22.917 nvme0n1 00:17:22.917 16:28:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:22.917 16:28:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:22.917 16:28:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:22.917 16:28:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:22.917 16:28:56 -- common/autotest_common.sh@10 -- # set +x 00:17:22.917 16:28:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:22.917 16:28:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.917 16:28:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:22.917 
16:28:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:22.917 16:28:56 -- common/autotest_common.sh@10 -- # set +x 00:17:22.917 16:28:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:22.917 16:28:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:22.917 16:28:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:17:22.917 16:28:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:22.917 16:28:56 -- host/auth.sh@44 -- # digest=sha512 00:17:22.917 16:28:56 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:22.917 16:28:56 -- host/auth.sh@44 -- # keyid=4 00:17:22.917 16:28:56 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjBjOWQwYjQ2OTNmNDk0Zjg3MzRiMDQ1MWNkYzY2ODY5NDliYmJkNWM1NjQxNjBkZjQyNmY0MjdkZTQyMjFlM6Od0wk=: 00:17:22.917 16:28:56 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:17:22.917 16:28:56 -- host/auth.sh@48 -- # echo ffdhe8192 00:17:22.917 16:28:56 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjBjOWQwYjQ2OTNmNDk0Zjg3MzRiMDQ1MWNkYzY2ODY5NDliYmJkNWM1NjQxNjBkZjQyNmY0MjdkZTQyMjFlM6Od0wk=: 00:17:22.917 16:28:56 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 4 00:17:22.917 16:28:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:22.917 16:28:56 -- host/auth.sh@68 -- # digest=sha512 00:17:22.917 16:28:56 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:17:22.917 16:28:56 -- host/auth.sh@68 -- # keyid=4 00:17:22.917 16:28:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:22.917 16:28:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:22.917 16:28:56 -- common/autotest_common.sh@10 -- # set +x 00:17:22.917 16:28:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:22.917 16:28:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:22.917 16:28:56 -- nvmf/common.sh@717 -- # local ip 00:17:22.917 16:28:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:22.917 16:28:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:22.917 16:28:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:22.917 16:28:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:22.917 16:28:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:22.917 16:28:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:22.917 16:28:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:22.917 16:28:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:22.917 16:28:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:22.917 16:28:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:22.917 16:28:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:22.917 16:28:56 -- common/autotest_common.sh@10 -- # set +x 00:17:23.483 nvme0n1 00:17:23.483 16:28:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.483 16:28:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:23.483 16:28:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.483 16:28:57 -- common/autotest_common.sh@10 -- # set +x 00:17:23.483 16:28:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:23.483 16:28:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.483 16:28:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.483 16:28:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:23.483 16:28:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.483 
16:28:57 -- common/autotest_common.sh@10 -- # set +x 00:17:23.483 16:28:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.483 16:28:57 -- host/auth.sh@117 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:23.483 16:28:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:23.483 16:28:57 -- host/auth.sh@44 -- # digest=sha256 00:17:23.483 16:28:57 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:23.483 16:28:57 -- host/auth.sh@44 -- # keyid=1 00:17:23.483 16:28:57 -- host/auth.sh@45 -- # key=DHHC-1:00:YzAxZWRkOWRmYWMxN2Q4ZjRjYTRhNjViMTYwNDk3MmZkMjY0YjUzNTVlNzNkNmRhPQxEjg==: 00:17:23.483 16:28:57 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:17:23.483 16:28:57 -- host/auth.sh@48 -- # echo ffdhe2048 00:17:23.483 16:28:57 -- host/auth.sh@49 -- # echo DHHC-1:00:YzAxZWRkOWRmYWMxN2Q4ZjRjYTRhNjViMTYwNDk3MmZkMjY0YjUzNTVlNzNkNmRhPQxEjg==: 00:17:23.483 16:28:57 -- host/auth.sh@118 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:23.483 16:28:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.483 16:28:57 -- common/autotest_common.sh@10 -- # set +x 00:17:23.742 16:28:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.742 16:28:57 -- host/auth.sh@119 -- # get_main_ns_ip 00:17:23.742 16:28:57 -- nvmf/common.sh@717 -- # local ip 00:17:23.742 16:28:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:23.742 16:28:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:23.742 16:28:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:23.742 16:28:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:23.742 16:28:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:23.742 16:28:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:23.742 16:28:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:23.742 16:28:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:23.742 16:28:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:23.742 16:28:57 -- host/auth.sh@119 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:23.742 16:28:57 -- common/autotest_common.sh@638 -- # local es=0 00:17:23.742 16:28:57 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:23.742 16:28:57 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:17:23.742 16:28:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:23.742 16:28:57 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:17:23.742 16:28:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:23.742 16:28:57 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:23.742 16:28:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.742 16:28:57 -- common/autotest_common.sh@10 -- # set +x 00:17:23.742 2024/04/17 16:28:57 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:23.742 request: 00:17:23.742 { 00:17:23.742 "method": 
"bdev_nvme_attach_controller", 00:17:23.742 "params": { 00:17:23.742 "name": "nvme0", 00:17:23.742 "trtype": "tcp", 00:17:23.742 "traddr": "10.0.0.1", 00:17:23.742 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:23.742 "adrfam": "ipv4", 00:17:23.742 "trsvcid": "4420", 00:17:23.742 "subnqn": "nqn.2024-02.io.spdk:cnode0" 00:17:23.742 } 00:17:23.742 } 00:17:23.742 Got JSON-RPC error response 00:17:23.742 GoRPCClient: error on JSON-RPC call 00:17:23.742 16:28:57 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:17:23.742 16:28:57 -- common/autotest_common.sh@641 -- # es=1 00:17:23.742 16:28:57 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:23.742 16:28:57 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:23.742 16:28:57 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:23.742 16:28:57 -- host/auth.sh@121 -- # rpc_cmd bdev_nvme_get_controllers 00:17:23.742 16:28:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.742 16:28:57 -- host/auth.sh@121 -- # jq length 00:17:23.742 16:28:57 -- common/autotest_common.sh@10 -- # set +x 00:17:23.742 16:28:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.742 16:28:57 -- host/auth.sh@121 -- # (( 0 == 0 )) 00:17:23.742 16:28:57 -- host/auth.sh@124 -- # get_main_ns_ip 00:17:23.742 16:28:57 -- nvmf/common.sh@717 -- # local ip 00:17:23.742 16:28:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:23.742 16:28:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:23.742 16:28:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:23.742 16:28:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:23.742 16:28:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:23.742 16:28:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:23.742 16:28:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:23.742 16:28:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:23.742 16:28:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:23.742 16:28:57 -- host/auth.sh@124 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:23.742 16:28:57 -- common/autotest_common.sh@638 -- # local es=0 00:17:23.742 16:28:57 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:23.742 16:28:57 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:17:23.742 16:28:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:23.742 16:28:57 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:17:23.742 16:28:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:23.742 16:28:57 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:23.742 16:28:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.742 16:28:57 -- common/autotest_common.sh@10 -- # set +x 00:17:23.742 2024/04/17 16:28:57 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 dhchap_key:key2 hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:23.742 
request: 00:17:23.742 { 00:17:23.742 "method": "bdev_nvme_attach_controller", 00:17:23.742 "params": { 00:17:23.742 "name": "nvme0", 00:17:23.742 "trtype": "tcp", 00:17:23.742 "traddr": "10.0.0.1", 00:17:23.742 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:23.742 "adrfam": "ipv4", 00:17:23.742 "trsvcid": "4420", 00:17:23.742 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:23.742 "dhchap_key": "key2" 00:17:23.742 } 00:17:23.742 } 00:17:23.742 Got JSON-RPC error response 00:17:23.742 GoRPCClient: error on JSON-RPC call 00:17:23.742 16:28:57 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:17:23.742 16:28:57 -- common/autotest_common.sh@641 -- # es=1 00:17:23.742 16:28:57 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:23.742 16:28:57 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:23.742 16:28:57 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:23.742 16:28:57 -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:17:23.742 16:28:57 -- host/auth.sh@127 -- # jq length 00:17:23.742 16:28:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.742 16:28:57 -- common/autotest_common.sh@10 -- # set +x 00:17:23.742 16:28:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.742 16:28:57 -- host/auth.sh@127 -- # (( 0 == 0 )) 00:17:23.742 16:28:57 -- host/auth.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:17:23.742 16:28:57 -- host/auth.sh@130 -- # cleanup 00:17:23.742 16:28:57 -- host/auth.sh@24 -- # nvmftestfini 00:17:23.742 16:28:57 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:23.742 16:28:57 -- nvmf/common.sh@117 -- # sync 00:17:23.742 16:28:57 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:23.742 16:28:57 -- nvmf/common.sh@120 -- # set +e 00:17:23.742 16:28:57 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:23.742 16:28:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:23.742 rmmod nvme_tcp 00:17:23.742 rmmod nvme_fabrics 00:17:23.742 16:28:57 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:24.001 16:28:57 -- nvmf/common.sh@124 -- # set -e 00:17:24.001 16:28:57 -- nvmf/common.sh@125 -- # return 0 00:17:24.001 16:28:57 -- nvmf/common.sh@478 -- # '[' -n 83684 ']' 00:17:24.001 16:28:57 -- nvmf/common.sh@479 -- # killprocess 83684 00:17:24.001 16:28:57 -- common/autotest_common.sh@936 -- # '[' -z 83684 ']' 00:17:24.001 16:28:57 -- common/autotest_common.sh@940 -- # kill -0 83684 00:17:24.001 16:28:57 -- common/autotest_common.sh@941 -- # uname 00:17:24.001 16:28:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:24.001 16:28:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83684 00:17:24.001 killing process with pid 83684 00:17:24.001 16:28:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:24.001 16:28:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:24.001 16:28:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83684' 00:17:24.001 16:28:57 -- common/autotest_common.sh@955 -- # kill 83684 00:17:24.001 16:28:57 -- common/autotest_common.sh@960 -- # wait 83684 00:17:24.260 16:28:58 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:24.260 16:28:58 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:24.260 16:28:58 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:24.260 16:28:58 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:24.260 16:28:58 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:24.260 16:28:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:24.260 
16:28:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:24.260 16:28:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:24.260 16:28:58 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:24.260 16:28:58 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:24.260 16:28:58 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:24.260 16:28:58 -- host/auth.sh@27 -- # clean_kernel_target 00:17:24.260 16:28:58 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:17:24.260 16:28:58 -- nvmf/common.sh@675 -- # echo 0 00:17:24.260 16:28:58 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:24.260 16:28:58 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:24.260 16:28:58 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:24.260 16:28:58 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:24.260 16:28:58 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:17:24.260 16:28:58 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:17:24.260 16:28:58 -- nvmf/common.sh@687 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:24.825 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:25.083 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:25.083 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:25.083 16:28:58 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.mi0 /tmp/spdk.key-null.Y8L /tmp/spdk.key-sha256.L5c /tmp/spdk.key-sha384.247 /tmp/spdk.key-sha512.B5L /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:17:25.083 16:28:59 -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:25.341 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:25.341 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:25.341 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:25.341 ************************************ 00:17:25.341 END TEST nvmf_auth 00:17:25.341 ************************************ 00:17:25.341 00:17:25.341 real 0m39.220s 00:17:25.341 user 0m35.039s 00:17:25.341 sys 0m3.628s 00:17:25.341 16:28:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:25.341 16:28:59 -- common/autotest_common.sh@10 -- # set +x 00:17:25.599 16:28:59 -- nvmf/nvmf.sh@104 -- # [[ tcp == \t\c\p ]] 00:17:25.599 16:28:59 -- nvmf/nvmf.sh@105 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:25.599 16:28:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:25.599 16:28:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:25.599 16:28:59 -- common/autotest_common.sh@10 -- # set +x 00:17:25.599 ************************************ 00:17:25.599 START TEST nvmf_digest 00:17:25.599 ************************************ 00:17:25.599 16:28:59 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:25.599 * Looking for test storage... 
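A note on the kernel-target plumbing exercised by nvmf_auth above: each nvmet_auth_set_key iteration echoes a digest name ('hmac(sha512)'), a DH group (ffdhe8192), and a DHHC-1 secret into the kernel nvmet configfs tree, and the teardown traced just above unwinds that tree in reverse order of creation. A minimal sketch of both halves, assuming the standard nvmet configfs layout; the dhchap_* attribute names and the target of the bare 'echo 0' are inferred from the kernel nvmet auth interface, not shown in the trace:

  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

  # provisioning (mirrors nvmet_auth_set_key sha512 ffdhe8192 0; use any
  # DHHC-1 secret from the trace in place of the placeholder)
  echo 'hmac(sha512)'  > "$host/dhchap_hash"     # assumed attribute name
  echo 'ffdhe8192'     > "$host/dhchap_dhgroup"  # assumed attribute name
  echo 'DHHC-1:00:...' > "$host/dhchap_key"      # assumed attribute name

  # teardown, in the order the trace runs it
  rm "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"
  rmdir "/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0"
  echo 0 > "$subsys/namespaces/1/enable"         # assumed target of 'echo 0'
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
  rmdir "$subsys/namespaces/1"
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir "$subsys"
  modprobe -r nvmet_tcp nvmet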
00:17:25.599 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:25.599 16:28:59 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:25.599 16:28:59 -- nvmf/common.sh@7 -- # uname -s 00:17:25.599 16:28:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:25.599 16:28:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:25.599 16:28:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:25.599 16:28:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:25.599 16:28:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:25.599 16:28:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:25.599 16:28:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:25.599 16:28:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:25.599 16:28:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:25.599 16:28:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:25.599 16:28:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d 00:17:25.599 16:28:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=35bbb10f-fc38-42ac-b909-033700c5e05d 00:17:25.599 16:28:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:25.599 16:28:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:25.599 16:28:59 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:25.599 16:28:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:25.599 16:28:59 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:25.599 16:28:59 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:25.599 16:28:59 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:25.599 16:28:59 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:25.599 16:28:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.599 16:28:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.599 16:28:59 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.599 16:28:59 -- paths/export.sh@5 -- # export PATH 00:17:25.599 16:28:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.599 16:28:59 -- nvmf/common.sh@47 -- # : 0 00:17:25.599 16:28:59 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:25.599 16:28:59 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:25.599 16:28:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:25.599 16:28:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:25.599 16:28:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:25.599 16:28:59 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:25.599 16:28:59 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:25.599 16:28:59 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:25.599 16:28:59 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:25.599 16:28:59 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:17:25.599 16:28:59 -- host/digest.sh@16 -- # runtime=2 00:17:25.599 16:28:59 -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:17:25.599 16:28:59 -- host/digest.sh@138 -- # nvmftestinit 00:17:25.599 16:28:59 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:25.599 16:28:59 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:25.599 16:28:59 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:25.599 16:28:59 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:25.599 16:28:59 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:25.599 16:28:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.599 16:28:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:25.599 16:28:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.599 16:28:59 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:17:25.599 16:28:59 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:17:25.599 16:28:59 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:17:25.599 16:28:59 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:17:25.599 16:28:59 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:17:25.599 16:28:59 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:17:25.599 16:28:59 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:25.599 16:28:59 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:25.600 16:28:59 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:25.600 16:28:59 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:25.600 16:28:59 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 
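Before the interface variables above are put to use: nvmf_veth_init, traced next, wires the initiator in the root network namespace to an SPDK target inside nvmf_tgt_ns_spdk through a Linux bridge. A condensed, standalone sketch of the topology it builds, using only commands that appear in the trace below (the second target pair, nvmf_tgt_if2/10.0.0.3, is created the same way):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move target end
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                     # bridge both halves
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT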
00:17:25.600 16:28:59 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:25.600 16:28:59 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:25.600 16:28:59 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:25.600 16:28:59 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:25.600 16:28:59 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:25.600 16:28:59 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:25.600 16:28:59 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:25.600 16:28:59 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:25.600 16:28:59 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:25.857 Cannot find device "nvmf_tgt_br" 00:17:25.857 16:28:59 -- nvmf/common.sh@155 -- # true 00:17:25.857 16:28:59 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:25.857 Cannot find device "nvmf_tgt_br2" 00:17:25.857 16:28:59 -- nvmf/common.sh@156 -- # true 00:17:25.857 16:28:59 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:25.857 16:28:59 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:25.857 Cannot find device "nvmf_tgt_br" 00:17:25.857 16:28:59 -- nvmf/common.sh@158 -- # true 00:17:25.857 16:28:59 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:25.857 Cannot find device "nvmf_tgt_br2" 00:17:25.857 16:28:59 -- nvmf/common.sh@159 -- # true 00:17:25.857 16:28:59 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:25.857 16:28:59 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:25.857 16:28:59 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:25.857 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:25.857 16:28:59 -- nvmf/common.sh@162 -- # true 00:17:25.857 16:28:59 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:25.857 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:25.857 16:28:59 -- nvmf/common.sh@163 -- # true 00:17:25.857 16:28:59 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:25.857 16:28:59 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:25.857 16:28:59 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:25.857 16:28:59 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:25.858 16:28:59 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:25.858 16:28:59 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:25.858 16:28:59 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:25.858 16:28:59 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:25.858 16:28:59 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:25.858 16:28:59 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:25.858 16:28:59 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:25.858 16:28:59 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:25.858 16:28:59 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:25.858 16:28:59 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:25.858 16:28:59 -- nvmf/common.sh@188 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:25.858 16:28:59 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:25.858 16:28:59 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:25.858 16:28:59 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:25.858 16:28:59 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:26.116 16:28:59 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:26.116 16:28:59 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:26.116 16:28:59 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:26.116 16:28:59 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:26.116 16:28:59 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:26.116 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:26.116 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:17:26.116 00:17:26.116 --- 10.0.0.2 ping statistics --- 00:17:26.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:26.116 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:17:26.116 16:28:59 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:26.116 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:26.116 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:17:26.116 00:17:26.116 --- 10.0.0.3 ping statistics --- 00:17:26.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:26.116 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:17:26.116 16:28:59 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:26.116 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:26.116 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:17:26.116 00:17:26.116 --- 10.0.0.1 ping statistics --- 00:17:26.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:26.116 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:17:26.116 16:28:59 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:26.116 16:28:59 -- nvmf/common.sh@422 -- # return 0 00:17:26.116 16:28:59 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:26.116 16:28:59 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:26.116 16:28:59 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:26.116 16:28:59 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:26.116 16:28:59 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:26.116 16:28:59 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:26.116 16:28:59 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:26.116 16:28:59 -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:26.116 16:28:59 -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:17:26.116 16:28:59 -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:17:26.116 16:28:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:26.116 16:28:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:26.116 16:28:59 -- common/autotest_common.sh@10 -- # set +x 00:17:26.116 ************************************ 00:17:26.116 START TEST nvmf_digest_clean 00:17:26.116 ************************************ 00:17:26.116 16:29:00 -- common/autotest_common.sh@1111 -- # run_digest 00:17:26.116 16:29:00 -- host/digest.sh@120 -- # local dsa_initiator 00:17:26.116 16:29:00 -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:17:26.116 16:29:00 -- host/digest.sh@121 -- # dsa_initiator=false 
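nvmf_digest_clean, starting here, runs the digest workload matrix with DSA offload disabled (dsa_initiator=false), so every crc32c falls to the software accel module. The four run_bperf calls the suite issues appear at host/digest.sh@128-131 in the trace; condensed as a sketch:

  # run_bperf <rw> <io_size> <queue_depth> <scan_dsa>
  run_bperf randread  4096   128 false
  run_bperf randread  131072 16  false
  run_bperf randwrite 4096   128 false
  run_bperf randwrite 131072 16  false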
00:17:26.116 16:29:00 -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:17:26.116 16:29:00 -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:17:26.116 16:29:00 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:26.116 16:29:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:26.116 16:29:00 -- common/autotest_common.sh@10 -- # set +x 00:17:26.116 16:29:00 -- nvmf/common.sh@470 -- # nvmfpid=85320 00:17:26.116 16:29:00 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:26.116 16:29:00 -- nvmf/common.sh@471 -- # waitforlisten 85320 00:17:26.116 16:29:00 -- common/autotest_common.sh@817 -- # '[' -z 85320 ']' 00:17:26.116 16:29:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:26.116 16:29:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:26.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:26.116 16:29:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:26.116 16:29:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:26.116 16:29:00 -- common/autotest_common.sh@10 -- # set +x 00:17:26.116 [2024-04-17 16:29:00.105433] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:17:26.116 [2024-04-17 16:29:00.105523] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:26.374 [2024-04-17 16:29:00.245546] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.374 [2024-04-17 16:29:00.375498] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:26.374 [2024-04-17 16:29:00.375847] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:26.374 [2024-04-17 16:29:00.375956] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:26.374 [2024-04-17 16:29:00.375976] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:26.374 [2024-04-17 16:29:00.375986] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
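The startup notices here come from nvmfappstart: the target is launched inside the namespace with --wait-for-rpc, so it initializes DPDK/EAL but holds subsystem initialization until an explicit framework_start_init RPC. A sketch of that launch-and-wait pattern, using the command shown in the trace (waitforlisten is the SPDK test helper that polls the RPC socket):

  # start nvmf_tgt paused inside the target namespace
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!

  # block until the app is listening on /var/tmp/spdk.sock
  waitforlisten "$nvmfpid"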
00:17:26.374 [2024-04-17 16:29:00.376033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:27.350 16:29:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:27.350 16:29:01 -- common/autotest_common.sh@850 -- # return 0 00:17:27.350 16:29:01 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:27.350 16:29:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:27.350 16:29:01 -- common/autotest_common.sh@10 -- # set +x 00:17:27.350 16:29:01 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:27.350 16:29:01 -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:17:27.350 16:29:01 -- host/digest.sh@126 -- # common_target_config 00:17:27.350 16:29:01 -- host/digest.sh@43 -- # rpc_cmd 00:17:27.350 16:29:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:27.350 16:29:01 -- common/autotest_common.sh@10 -- # set +x 00:17:27.350 null0 00:17:27.350 [2024-04-17 16:29:01.246028] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:27.350 [2024-04-17 16:29:01.270188] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:27.350 16:29:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:27.351 16:29:01 -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:17:27.351 16:29:01 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:27.351 16:29:01 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:27.351 16:29:01 -- host/digest.sh@80 -- # rw=randread 00:17:27.351 16:29:01 -- host/digest.sh@80 -- # bs=4096 00:17:27.351 16:29:01 -- host/digest.sh@80 -- # qd=128 00:17:27.351 16:29:01 -- host/digest.sh@80 -- # scan_dsa=false 00:17:27.351 16:29:01 -- host/digest.sh@83 -- # bperfpid=85374 00:17:27.351 16:29:01 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:27.351 16:29:01 -- host/digest.sh@84 -- # waitforlisten 85374 /var/tmp/bperf.sock 00:17:27.351 16:29:01 -- common/autotest_common.sh@817 -- # '[' -z 85374 ']' 00:17:27.351 16:29:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:27.351 16:29:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:27.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:27.351 16:29:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:27.351 16:29:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:27.351 16:29:01 -- common/autotest_common.sh@10 -- # set +x 00:17:27.351 [2024-04-17 16:29:01.329593] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 
00:17:27.351 [2024-04-17 16:29:01.329676] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85374 ] 00:17:27.609 [2024-04-17 16:29:01.465319] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.609 [2024-04-17 16:29:01.581986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:28.542 16:29:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:28.542 16:29:02 -- common/autotest_common.sh@850 -- # return 0 00:17:28.542 16:29:02 -- host/digest.sh@86 -- # false 00:17:28.542 16:29:02 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:28.542 16:29:02 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:28.799 16:29:02 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:28.799 16:29:02 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:29.057 nvme0n1 00:17:29.057 16:29:02 -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:29.057 16:29:02 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:29.057 Running I/O for 2 seconds... 00:17:31.641 00:17:31.641 Latency(us) 00:17:31.641 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:31.641 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:17:31.641 nvme0n1 : 2.00 18124.38 70.80 0.00 0.00 7053.15 3276.80 13941.29 00:17:31.641 =================================================================================================================== 00:17:31.641 Total : 18124.38 70.80 0.00 0.00 7053.15 3276.80 13941.29 00:17:31.641 0 00:17:31.641 16:29:05 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:31.641 16:29:05 -- host/digest.sh@93 -- # get_accel_stats 00:17:31.641 16:29:05 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:31.641 | select(.opcode=="crc32c") 00:17:31.641 | "\(.module_name) \(.executed)"' 00:17:31.641 16:29:05 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:31.641 16:29:05 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:31.641 16:29:05 -- host/digest.sh@94 -- # false 00:17:31.641 16:29:05 -- host/digest.sh@94 -- # exp_module=software 00:17:31.641 16:29:05 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:31.641 16:29:05 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:31.641 16:29:05 -- host/digest.sh@98 -- # killprocess 85374 00:17:31.641 16:29:05 -- common/autotest_common.sh@936 -- # '[' -z 85374 ']' 00:17:31.641 16:29:05 -- common/autotest_common.sh@940 -- # kill -0 85374 00:17:31.641 16:29:05 -- common/autotest_common.sh@941 -- # uname 00:17:31.641 16:29:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:31.641 16:29:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85374 00:17:31.641 16:29:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:31.641 killing process with pid 85374 00:17:31.641 16:29:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:31.641 
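The pass/fail decision for the run that just finished is the accel-stats check traced above: read crc32c statistics over the bperf RPC socket and require that the expected module did nonzero work. A sketch with the same rpc.py and jq invocations as the trace (repo-relative paths assumed):

  # "<module_name> <executed>" for the crc32c opcode
  read -r acc_module acc_executed < <(
      scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[]
                | select(.opcode=="crc32c")
                | "\(.module_name) \(.executed)"')

  (( acc_executed > 0 ))              # digests were actually computed
  [[ $acc_module == software ]]       # scan_dsa=false => software crc32c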
16:29:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85374' 00:17:31.641 Received shutdown signal, test time was about 2.000000 seconds 00:17:31.641 00:17:31.641 Latency(us) 00:17:31.641 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:31.641 =================================================================================================================== 00:17:31.641 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:31.641 16:29:05 -- common/autotest_common.sh@955 -- # kill 85374 00:17:31.641 16:29:05 -- common/autotest_common.sh@960 -- # wait 85374 00:17:31.900 16:29:05 -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:17:31.900 16:29:05 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:31.900 16:29:05 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:31.900 16:29:05 -- host/digest.sh@80 -- # rw=randread 00:17:31.900 16:29:05 -- host/digest.sh@80 -- # bs=131072 00:17:31.900 16:29:05 -- host/digest.sh@80 -- # qd=16 00:17:31.900 16:29:05 -- host/digest.sh@80 -- # scan_dsa=false 00:17:31.900 16:29:05 -- host/digest.sh@83 -- # bperfpid=85461 00:17:31.900 16:29:05 -- host/digest.sh@84 -- # waitforlisten 85461 /var/tmp/bperf.sock 00:17:31.900 16:29:05 -- common/autotest_common.sh@817 -- # '[' -z 85461 ']' 00:17:31.900 16:29:05 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:31.900 16:29:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:31.900 16:29:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:31.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:31.900 16:29:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:31.900 16:29:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:31.901 16:29:05 -- common/autotest_common.sh@10 -- # set +x 00:17:31.901 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:31.901 Zero copy mechanism will not be used. 00:17:31.901 [2024-04-17 16:29:05.740969] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 
00:17:31.901 [2024-04-17 16:29:05.741087] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85461 ] 00:17:31.901 [2024-04-17 16:29:05.874268] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.159 [2024-04-17 16:29:05.993162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:32.726 16:29:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:32.726 16:29:06 -- common/autotest_common.sh@850 -- # return 0 00:17:32.726 16:29:06 -- host/digest.sh@86 -- # false 00:17:32.726 16:29:06 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:32.726 16:29:06 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:33.294 16:29:07 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:33.294 16:29:07 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:33.553 nvme0n1 00:17:33.553 16:29:07 -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:33.553 16:29:07 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:33.553 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:33.553 Zero copy mechanism will not be used. 00:17:33.553 Running I/O for 2 seconds... 00:17:35.455 00:17:35.455 Latency(us) 00:17:35.455 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:35.455 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:17:35.455 nvme0n1 : 2.00 8221.60 1027.70 0.00 0.00 1942.44 614.40 3530.01 00:17:35.455 =================================================================================================================== 00:17:35.455 Total : 8221.60 1027.70 0.00 0.00 1942.44 614.40 3530.01 00:17:35.455 0 00:17:35.455 16:29:09 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:35.455 16:29:09 -- host/digest.sh@93 -- # get_accel_stats 00:17:35.455 16:29:09 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:35.455 16:29:09 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:35.455 16:29:09 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:35.455 | select(.opcode=="crc32c") 00:17:35.455 | "\(.module_name) \(.executed)"' 00:17:36.021 16:29:09 -- host/digest.sh@94 -- # false 00:17:36.021 16:29:09 -- host/digest.sh@94 -- # exp_module=software 00:17:36.021 16:29:09 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:36.021 16:29:09 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:36.021 16:29:09 -- host/digest.sh@98 -- # killprocess 85461 00:17:36.021 16:29:09 -- common/autotest_common.sh@936 -- # '[' -z 85461 ']' 00:17:36.021 16:29:09 -- common/autotest_common.sh@940 -- # kill -0 85461 00:17:36.021 16:29:09 -- common/autotest_common.sh@941 -- # uname 00:17:36.022 16:29:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:36.022 16:29:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85461 00:17:36.022 16:29:09 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:36.022 
16:29:09 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:36.022 16:29:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85461' 00:17:36.022 killing process with pid 85461 00:17:36.022 Received shutdown signal, test time was about 2.000000 seconds 00:17:36.022 00:17:36.022 Latency(us) 00:17:36.022 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:36.022 =================================================================================================================== 00:17:36.022 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:36.022 16:29:09 -- common/autotest_common.sh@955 -- # kill 85461 00:17:36.022 16:29:09 -- common/autotest_common.sh@960 -- # wait 85461 00:17:36.022 16:29:10 -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:17:36.022 16:29:10 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:36.022 16:29:10 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:36.022 16:29:10 -- host/digest.sh@80 -- # rw=randwrite 00:17:36.022 16:29:10 -- host/digest.sh@80 -- # bs=4096 00:17:36.022 16:29:10 -- host/digest.sh@80 -- # qd=128 00:17:36.022 16:29:10 -- host/digest.sh@80 -- # scan_dsa=false 00:17:36.022 16:29:10 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:36.022 16:29:10 -- host/digest.sh@83 -- # bperfpid=85552 00:17:36.022 16:29:10 -- host/digest.sh@84 -- # waitforlisten 85552 /var/tmp/bperf.sock 00:17:36.022 16:29:10 -- common/autotest_common.sh@817 -- # '[' -z 85552 ']' 00:17:36.022 16:29:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:36.022 16:29:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:36.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:36.022 16:29:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:36.022 16:29:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:36.022 16:29:10 -- common/autotest_common.sh@10 -- # set +x 00:17:36.279 [2024-04-17 16:29:10.103071] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 
00:17:36.279 [2024-04-17 16:29:10.103164] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85552 ] 00:17:36.279 [2024-04-17 16:29:10.236440] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.537 [2024-04-17 16:29:10.349718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:37.481 16:29:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:37.481 16:29:11 -- common/autotest_common.sh@850 -- # return 0 00:17:37.481 16:29:11 -- host/digest.sh@86 -- # false 00:17:37.481 16:29:11 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:37.481 16:29:11 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:37.481 16:29:11 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:37.481 16:29:11 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:38.047 nvme0n1 00:17:38.047 16:29:11 -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:38.047 16:29:11 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:38.047 Running I/O for 2 seconds... 00:17:39.961 00:17:39.961 Latency(us) 00:17:39.961 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.961 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:39.961 nvme0n1 : 2.01 21639.94 84.53 0.00 0.00 5907.78 3023.59 10962.39 00:17:39.961 =================================================================================================================== 00:17:39.961 Total : 21639.94 84.53 0.00 0.00 5907.78 3023.59 10962.39 00:17:39.961 0 00:17:39.961 16:29:13 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:39.961 16:29:13 -- host/digest.sh@93 -- # get_accel_stats 00:17:39.961 16:29:13 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:39.961 16:29:13 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:39.961 16:29:13 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:39.961 | select(.opcode=="crc32c") 00:17:39.961 | "\(.module_name) \(.executed)"' 00:17:40.218 16:29:14 -- host/digest.sh@94 -- # false 00:17:40.218 16:29:14 -- host/digest.sh@94 -- # exp_module=software 00:17:40.218 16:29:14 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:40.218 16:29:14 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:40.218 16:29:14 -- host/digest.sh@98 -- # killprocess 85552 00:17:40.218 16:29:14 -- common/autotest_common.sh@936 -- # '[' -z 85552 ']' 00:17:40.218 16:29:14 -- common/autotest_common.sh@940 -- # kill -0 85552 00:17:40.218 16:29:14 -- common/autotest_common.sh@941 -- # uname 00:17:40.218 16:29:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:40.218 16:29:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85552 00:17:40.218 killing process with pid 85552 00:17:40.218 Received shutdown signal, test time was about 2.000000 seconds 00:17:40.218 00:17:40.218 Latency(us) 00:17:40.218 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:17:40.218 =================================================================================================================== 00:17:40.218 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:40.218 16:29:14 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:40.218 16:29:14 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:40.218 16:29:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85552' 00:17:40.218 16:29:14 -- common/autotest_common.sh@955 -- # kill 85552 00:17:40.218 16:29:14 -- common/autotest_common.sh@960 -- # wait 85552 00:17:40.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:40.485 16:29:14 -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:17:40.485 16:29:14 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:40.486 16:29:14 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:40.486 16:29:14 -- host/digest.sh@80 -- # rw=randwrite 00:17:40.486 16:29:14 -- host/digest.sh@80 -- # bs=131072 00:17:40.486 16:29:14 -- host/digest.sh@80 -- # qd=16 00:17:40.486 16:29:14 -- host/digest.sh@80 -- # scan_dsa=false 00:17:40.486 16:29:14 -- host/digest.sh@83 -- # bperfpid=85648 00:17:40.486 16:29:14 -- host/digest.sh@84 -- # waitforlisten 85648 /var/tmp/bperf.sock 00:17:40.486 16:29:14 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:40.486 16:29:14 -- common/autotest_common.sh@817 -- # '[' -z 85648 ']' 00:17:40.486 16:29:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:40.486 16:29:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:40.486 16:29:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:40.486 16:29:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:40.486 16:29:14 -- common/autotest_common.sh@10 -- # set +x 00:17:40.751 [2024-04-17 16:29:14.551428] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:17:40.751 [2024-04-17 16:29:14.551850] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85648 ] 00:17:40.751 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:40.751 Zero copy mechanism will not be used. 
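The pass check traced in the runs above pipes accel_get_stats through jq to confirm the crc32c work actually executed in the expected module ("software" when no offload engine is present). A sketch with a hypothetical stats payload; the JSON shape is an assumption for illustration, while the jq filter and the assertions are copied from the trace:

  # Hypothetical accel_get_stats output, shape assumed for illustration:
  stats='{"operations":[{"opcode":"crc32c","module_name":"software","executed":6604}]}'
  read -r acc_module acc_executed < <(printf '%s' "$stats" | jq -rc '.operations[]
  | select(.opcode=="crc32c")
  | "\(.module_name) \(.executed)"')
  # Same checks as host/digest.sh: right module, and at least one execution.
  [[ $acc_module == software ]] && (( acc_executed > 0 )) && echo 'digest check OK'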
00:17:40.751 [2024-04-17 16:29:14.696134] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.009 [2024-04-17 16:29:14.810502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:41.574 16:29:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:41.574 16:29:15 -- common/autotest_common.sh@850 -- # return 0 00:17:41.574 16:29:15 -- host/digest.sh@86 -- # false 00:17:41.574 16:29:15 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:41.574 16:29:15 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:42.140 16:29:15 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:42.140 16:29:15 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:42.398 nvme0n1 00:17:42.398 16:29:16 -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:42.398 16:29:16 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:42.398 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:42.398 Zero copy mechanism will not be used. 00:17:42.398 Running I/O for 2 seconds... 00:17:44.935 00:17:44.935 Latency(us) 00:17:44.935 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:44.935 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:17:44.935 nvme0n1 : 2.00 6604.36 825.54 0.00 0.00 2417.17 1906.50 12392.26 00:17:44.935 =================================================================================================================== 00:17:44.935 Total : 6604.36 825.54 0.00 0.00 2417.17 1906.50 12392.26 00:17:44.935 0 00:17:44.935 16:29:18 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:44.935 16:29:18 -- host/digest.sh@93 -- # get_accel_stats 00:17:44.935 16:29:18 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:44.935 | select(.opcode=="crc32c") 00:17:44.935 | "\(.module_name) \(.executed)"' 00:17:44.935 16:29:18 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:44.935 16:29:18 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:44.935 16:29:18 -- host/digest.sh@94 -- # false 00:17:44.935 16:29:18 -- host/digest.sh@94 -- # exp_module=software 00:17:44.935 16:29:18 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:44.935 16:29:18 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:44.935 16:29:18 -- host/digest.sh@98 -- # killprocess 85648 00:17:44.935 16:29:18 -- common/autotest_common.sh@936 -- # '[' -z 85648 ']' 00:17:44.935 16:29:18 -- common/autotest_common.sh@940 -- # kill -0 85648 00:17:44.935 16:29:18 -- common/autotest_common.sh@941 -- # uname 00:17:44.935 16:29:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:44.935 16:29:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85648 00:17:44.935 16:29:18 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:44.935 16:29:18 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:44.935 killing process with pid 85648 00:17:44.935 16:29:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85648' 00:17:44.935 Received shutdown signal, test time was about 2.000000 
seconds 00:17:44.935 00:17:44.935 Latency(us) 00:17:44.935 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:44.935 =================================================================================================================== 00:17:44.935 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:44.935 16:29:18 -- common/autotest_common.sh@955 -- # kill 85648 00:17:44.935 16:29:18 -- common/autotest_common.sh@960 -- # wait 85648 00:17:45.193 16:29:18 -- host/digest.sh@132 -- # killprocess 85320 00:17:45.193 16:29:18 -- common/autotest_common.sh@936 -- # '[' -z 85320 ']' 00:17:45.193 16:29:18 -- common/autotest_common.sh@940 -- # kill -0 85320 00:17:45.193 16:29:18 -- common/autotest_common.sh@941 -- # uname 00:17:45.193 16:29:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:45.193 16:29:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85320 00:17:45.193 16:29:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:45.193 16:29:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:45.193 killing process with pid 85320 00:17:45.193 16:29:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85320' 00:17:45.193 16:29:19 -- common/autotest_common.sh@955 -- # kill 85320 00:17:45.193 16:29:19 -- common/autotest_common.sh@960 -- # wait 85320 00:17:45.451 00:17:45.451 real 0m19.221s 00:17:45.451 user 0m36.893s 00:17:45.451 sys 0m4.673s 00:17:45.451 16:29:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:45.451 16:29:19 -- common/autotest_common.sh@10 -- # set +x 00:17:45.451 ************************************ 00:17:45.451 END TEST nvmf_digest_clean 00:17:45.451 ************************************ 00:17:45.451 16:29:19 -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:17:45.451 16:29:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:45.451 16:29:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:45.451 16:29:19 -- common/autotest_common.sh@10 -- # set +x 00:17:45.451 ************************************ 00:17:45.451 START TEST nvmf_digest_error 00:17:45.451 ************************************ 00:17:45.451 16:29:19 -- common/autotest_common.sh@1111 -- # run_digest_error 00:17:45.451 16:29:19 -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:17:45.451 16:29:19 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:45.451 16:29:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:45.451 16:29:19 -- common/autotest_common.sh@10 -- # set +x 00:17:45.451 16:29:19 -- nvmf/common.sh@470 -- # nvmfpid=85768 00:17:45.451 16:29:19 -- nvmf/common.sh@471 -- # waitforlisten 85768 00:17:45.451 16:29:19 -- common/autotest_common.sh@817 -- # '[' -z 85768 ']' 00:17:45.451 16:29:19 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:45.451 16:29:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:45.451 16:29:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:45.451 16:29:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:45.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
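Note the switch below from the short-lived bperf processes to a fresh nvmf target started with --wait-for-rpc: the error tests need crc32c rerouted to the "error" accel module before the framework initializes, which is only possible while the app is paused. A rough sketch of that startup order, under the caveat that accel_assign_opc is traced just below while the target-side framework_start_init step is implied by --wait-for-rpc rather than traced here:

  build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  # While the app is paused, route crc32c through the error-injection module.
  scripts/rpc.py accel_assign_opc -o crc32c -m error
  # Resume initialization; transport and listener setup follow as usual.
  scripts/rpc.py framework_start_init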
00:17:45.451 16:29:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:45.451 16:29:19 -- common/autotest_common.sh@10 -- # set +x 00:17:45.451 [2024-04-17 16:29:19.437280] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:17:45.451 [2024-04-17 16:29:19.437365] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:45.709 [2024-04-17 16:29:19.574811] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.709 [2024-04-17 16:29:19.703477] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:45.709 [2024-04-17 16:29:19.703537] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:45.709 [2024-04-17 16:29:19.703551] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:45.709 [2024-04-17 16:29:19.703563] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:45.709 [2024-04-17 16:29:19.703572] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:45.709 [2024-04-17 16:29:19.703610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.695 16:29:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:46.695 16:29:20 -- common/autotest_common.sh@850 -- # return 0 00:17:46.695 16:29:20 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:46.695 16:29:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:46.695 16:29:20 -- common/autotest_common.sh@10 -- # set +x 00:17:46.695 16:29:20 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:46.695 16:29:20 -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:17:46.695 16:29:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:46.695 16:29:20 -- common/autotest_common.sh@10 -- # set +x 00:17:46.695 [2024-04-17 16:29:20.464423] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:17:46.695 16:29:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:46.695 16:29:20 -- host/digest.sh@105 -- # common_target_config 00:17:46.695 16:29:20 -- host/digest.sh@43 -- # rpc_cmd 00:17:46.695 16:29:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:46.695 16:29:20 -- common/autotest_common.sh@10 -- # set +x 00:17:46.695 null0 00:17:46.695 [2024-04-17 16:29:20.576021] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:46.695 [2024-04-17 16:29:20.600186] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:46.695 16:29:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:46.695 16:29:20 -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:17:46.695 16:29:20 -- host/digest.sh@54 -- # local rw bs qd 00:17:46.695 16:29:20 -- host/digest.sh@56 -- # rw=randread 00:17:46.695 16:29:20 -- host/digest.sh@56 -- # bs=4096 00:17:46.695 16:29:20 -- host/digest.sh@56 -- # qd=128 00:17:46.695 16:29:20 -- host/digest.sh@58 -- # bperfpid=85818 00:17:46.695 16:29:20 -- host/digest.sh@60 -- # waitforlisten 85818 /var/tmp/bperf.sock 00:17:46.695 16:29:20 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w 
randread -o 4096 -t 2 -q 128 -z 00:17:46.695 16:29:20 -- common/autotest_common.sh@817 -- # '[' -z 85818 ']' 00:17:46.695 16:29:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:46.695 16:29:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:46.695 16:29:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:46.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:46.695 16:29:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:46.695 16:29:20 -- common/autotest_common.sh@10 -- # set +x 00:17:46.695 [2024-04-17 16:29:20.660237] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:17:46.696 [2024-04-17 16:29:20.660346] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85818 ] 00:17:46.954 [2024-04-17 16:29:20.799363] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.954 [2024-04-17 16:29:20.912002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:47.888 16:29:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:47.888 16:29:21 -- common/autotest_common.sh@850 -- # return 0 00:17:47.888 16:29:21 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:47.888 16:29:21 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:48.146 16:29:21 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:48.146 16:29:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:48.146 16:29:21 -- common/autotest_common.sh@10 -- # set +x 00:17:48.146 16:29:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:48.146 16:29:21 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:48.146 16:29:21 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:48.404 nvme0n1 00:17:48.404 16:29:22 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:17:48.404 16:29:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:48.404 16:29:22 -- common/autotest_common.sh@10 -- # set +x 00:17:48.404 16:29:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:48.404 16:29:22 -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:48.404 16:29:22 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:48.404 Running I/O for 2 seconds... 
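Everything from here to the end of this run is the intended failure path: the accel_error_inject_error call above ("-t corrupt -i 256") corrupts the computed crc32c for 256 operations, so each affected READ is logged as a data digest error on the TCP qpair and completed with COMMAND TRANSIENT TRANSPORT ERROR (the "(00/22)" in the completion prints is status code type 0x00, generic, with status code 0x22, transient transport error), which bdev_nvme keeps retrying because --bdev-retry-count was set to -1 above. The two injection forms, copied from the rpc_cmd traces:

  # Clear any stale injection, then corrupt the next 256 crc32c results.
  scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256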
00:17:48.404 [2024-04-17 16:29:22.424440] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:48.404 [2024-04-17 16:29:22.424501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.404 [2024-04-17 16:29:22.424517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.404 [2024-04-17 16:29:22.438766] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:48.404 [2024-04-17 16:29:22.438826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:19270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.404 [2024-04-17 16:29:22.438841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.664 [2024-04-17 16:29:22.453210] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:48.664 [2024-04-17 16:29:22.453265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.664 [2024-04-17 16:29:22.453279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.664 [2024-04-17 16:29:22.466751] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:48.664 [2024-04-17 16:29:22.466808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.664 [2024-04-17 16:29:22.466822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.664 [2024-04-17 16:29:22.481202] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:48.664 [2024-04-17 16:29:22.481249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.664 [2024-04-17 16:29:22.481262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.664 [2024-04-17 16:29:22.494554] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:48.664 [2024-04-17 16:29:22.494596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:2761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.664 [2024-04-17 16:29:22.494610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.664 [2024-04-17 16:29:22.509089] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:48.664 [2024-04-17 16:29:22.509125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:12595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.664 [2024-04-17 16:29:22.509141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.664 [2024-04-17 16:29:22.520662] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:48.664 [2024-04-17 16:29:22.520701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:13356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.664 [2024-04-17 16:29:22.520715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.664 [2024-04-17 16:29:22.535275] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:48.664 [2024-04-17 16:29:22.535318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:23575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.664 [2024-04-17 16:29:22.535333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.664 [2024-04-17 16:29:22.546935] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:48.664 [2024-04-17 16:29:22.546976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.664 [2024-04-17 16:29:22.546990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.664 [2024-04-17 16:29:22.562416] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:48.664 [2024-04-17 16:29:22.562458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.664 [2024-04-17 16:29:22.562471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.664 [2024-04-17 16:29:22.575537] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:48.664 [2024-04-17 16:29:22.575577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:13201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.664 [2024-04-17 16:29:22.575590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.664 [2024-04-17 16:29:22.589948] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:48.664 [2024-04-17 16:29:22.589995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:19335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.664 [2024-04-17 16:29:22.590008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.664 [2024-04-17 16:29:22.601726] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:48.664 [2024-04-17 16:29:22.601786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:13516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.664 [2024-04-17 16:29:22.601802] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.664 [2024-04-17 16:29:22.616174] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:48.664 [2024-04-17 16:29:22.616226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:24040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.664 [2024-04-17 16:29:22.616241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.664 [2024-04-17 16:29:22.628830] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:48.664 [2024-04-17 16:29:22.628882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.664 [2024-04-17 16:29:22.628895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.664 [2024-04-17 16:29:22.642930] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:48.664 [2024-04-17 16:29:22.642977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:23595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.664 [2024-04-17 16:29:22.642990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.664 [2024-04-17 16:29:22.656970] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:48.664 [2024-04-17 16:29:22.657013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:8527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.664 [2024-04-17 16:29:22.657027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.664 [2024-04-17 16:29:22.671459] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:48.664 [2024-04-17 16:29:22.671506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:20509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.664 [2024-04-17 16:29:22.671520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.665 [2024-04-17 16:29:22.683681] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:48.665 [2024-04-17 16:29:22.683722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.665 [2024-04-17 16:29:22.683736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.665 [2024-04-17 16:29:22.697190] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:48.665 [2024-04-17 16:29:22.697232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:22815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:48.665 [2024-04-17 16:29:22.697246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.924 [2024-04-17 16:29:22.709648] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:48.924 [2024-04-17 16:29:22.709692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:17679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.924 [2024-04-17 16:29:22.709706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.924 [2024-04-17 16:29:22.724609] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:48.924 [2024-04-17 16:29:22.724656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:18139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.924 [2024-04-17 16:29:22.724670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.924 [2024-04-17 16:29:22.738127] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:48.924 [2024-04-17 16:29:22.738172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.924 [2024-04-17 16:29:22.738187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.924 [2024-04-17 16:29:22.752179] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:48.924 [2024-04-17 16:29:22.752225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:1778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.924 [2024-04-17 16:29:22.752238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.924 [2024-04-17 16:29:22.766537] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:48.924 [2024-04-17 16:29:22.766582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.924 [2024-04-17 16:29:22.766596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.924 [2024-04-17 16:29:22.777969] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:48.924 [2024-04-17 16:29:22.778007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:16809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.924 [2024-04-17 16:29:22.778021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.924 [2024-04-17 16:29:22.792100] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:48.924 [2024-04-17 16:29:22.792141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 
lba:3267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.924 [2024-04-17 16:29:22.792155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.924 [2024-04-17 16:29:22.805362] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:48.924 [2024-04-17 16:29:22.805399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.924 [2024-04-17 16:29:22.805413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.924 [2024-04-17 16:29:22.820903] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:48.924 [2024-04-17 16:29:22.820947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.924 [2024-04-17 16:29:22.820962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.924 [2024-04-17 16:29:22.833979] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:48.924 [2024-04-17 16:29:22.834015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:21116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.924 [2024-04-17 16:29:22.834029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.924 [2024-04-17 16:29:22.848360] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:48.924 [2024-04-17 16:29:22.848397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:3190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.924 [2024-04-17 16:29:22.848410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.924 [2024-04-17 16:29:22.862481] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:48.924 [2024-04-17 16:29:22.862538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:23192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.924 [2024-04-17 16:29:22.862553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.924 [2024-04-17 16:29:22.874597] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:48.924 [2024-04-17 16:29:22.874639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.924 [2024-04-17 16:29:22.874652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.924 [2024-04-17 16:29:22.889208] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:48.924 [2024-04-17 16:29:22.889257] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:25528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.924 [2024-04-17 16:29:22.889271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.924 [2024-04-17 16:29:22.903116] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:48.924 [2024-04-17 16:29:22.903157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.924 [2024-04-17 16:29:22.903170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.924 [2024-04-17 16:29:22.917891] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:48.924 [2024-04-17 16:29:22.917940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:20150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.924 [2024-04-17 16:29:22.917954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.924 [2024-04-17 16:29:22.932850] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:48.924 [2024-04-17 16:29:22.932899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.924 [2024-04-17 16:29:22.932912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.924 [2024-04-17 16:29:22.945622] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:48.924 [2024-04-17 16:29:22.945667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.924 [2024-04-17 16:29:22.945680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.924 [2024-04-17 16:29:22.960314] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:48.924 [2024-04-17 16:29:22.960364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:9873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.924 [2024-04-17 16:29:22.960378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.183 [2024-04-17 16:29:22.975433] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:49.183 [2024-04-17 16:29:22.975483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:12361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.183 [2024-04-17 16:29:22.975497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.183 [2024-04-17 16:29:22.987590] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 
00:17:49.183 [2024-04-17 16:29:22.987639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:21115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.183 [2024-04-17 16:29:22.987654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.183 [2024-04-17 16:29:23.003171] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:49.183 [2024-04-17 16:29:23.003224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.183 [2024-04-17 16:29:23.003238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.183 [2024-04-17 16:29:23.017433] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:49.183 [2024-04-17 16:29:23.017480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.183 [2024-04-17 16:29:23.017494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.183 [2024-04-17 16:29:23.031002] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:49.183 [2024-04-17 16:29:23.031045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:18701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.183 [2024-04-17 16:29:23.031058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.183 [2024-04-17 16:29:23.044977] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:49.183 [2024-04-17 16:29:23.045020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.183 [2024-04-17 16:29:23.045034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.183 [2024-04-17 16:29:23.057591] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:49.184 [2024-04-17 16:29:23.057632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:15983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.184 [2024-04-17 16:29:23.057646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.184 [2024-04-17 16:29:23.071991] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:49.184 [2024-04-17 16:29:23.072034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.184 [2024-04-17 16:29:23.072048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.184 [2024-04-17 16:29:23.086743] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:49.184 [2024-04-17 16:29:23.086797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.184 [2024-04-17 16:29:23.086811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.184 [2024-04-17 16:29:23.102661] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:49.184 [2024-04-17 16:29:23.102706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:12686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.184 [2024-04-17 16:29:23.102719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.184 [2024-04-17 16:29:23.116548] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:49.184 [2024-04-17 16:29:23.116597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.184 [2024-04-17 16:29:23.116612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.184 [2024-04-17 16:29:23.129147] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:49.184 [2024-04-17 16:29:23.129190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.184 [2024-04-17 16:29:23.129214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.184 [2024-04-17 16:29:23.143673] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:49.184 [2024-04-17 16:29:23.143722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:18513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.184 [2024-04-17 16:29:23.143737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.184 [2024-04-17 16:29:23.155866] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:49.184 [2024-04-17 16:29:23.155917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.184 [2024-04-17 16:29:23.155931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.184 [2024-04-17 16:29:23.170567] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:49.184 [2024-04-17 16:29:23.170622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:4764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.184 [2024-04-17 16:29:23.170637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:17:49.184 [2024-04-17 16:29:23.183042] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:49.184 [2024-04-17 16:29:23.183104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.184 [2024-04-17 16:29:23.183118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.184 [2024-04-17 16:29:23.197656] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:49.184 [2024-04-17 16:29:23.197723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:19835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.184 [2024-04-17 16:29:23.197738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.184 [2024-04-17 16:29:23.212310] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:49.184 [2024-04-17 16:29:23.212376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:17079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.184 [2024-04-17 16:29:23.212390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.184 [2024-04-17 16:29:23.227313] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:49.184 [2024-04-17 16:29:23.227378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.184 [2024-04-17 16:29:23.227393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.444 [2024-04-17 16:29:23.239558] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:49.444 [2024-04-17 16:29:23.239626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:13736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.444 [2024-04-17 16:29:23.239640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.444 [2024-04-17 16:29:23.256281] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:49.444 [2024-04-17 16:29:23.256332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:24365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.444 [2024-04-17 16:29:23.256346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.444 [2024-04-17 16:29:23.269981] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:49.444 [2024-04-17 16:29:23.270030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:18379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.444 [2024-04-17 16:29:23.270044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.444 [2024-04-17 16:29:23.283393] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:49.444 [2024-04-17 16:29:23.283444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.444 [2024-04-17 16:29:23.283457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.444 [2024-04-17 16:29:23.297256] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:49.444 [2024-04-17 16:29:23.297318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.444 [2024-04-17 16:29:23.297333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.444 [2024-04-17 16:29:23.310522] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:49.444 [2024-04-17 16:29:23.310572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:18004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.444 [2024-04-17 16:29:23.310586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.444 [2024-04-17 16:29:23.322821] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:49.444 [2024-04-17 16:29:23.322865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:8324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.444 [2024-04-17 16:29:23.322880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.444 [2024-04-17 16:29:23.337371] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:49.444 [2024-04-17 16:29:23.337415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:23669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.444 [2024-04-17 16:29:23.337428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.444 [2024-04-17 16:29:23.351469] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:49.444 [2024-04-17 16:29:23.351515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:15533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.444 [2024-04-17 16:29:23.351529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.444 [2024-04-17 16:29:23.363870] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40) 00:17:49.444 [2024-04-17 16:29:23.363933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:24399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.444 [2024-04-17 16:29:23.363947] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:17:49.444 [2024-04-17 16:29:23.378361] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40)
00:17:49.444 [2024-04-17 16:29:23.378421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:20109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:49.444 [2024-04-17 16:29:23.378436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... further data digest error / COMMAND TRANSIENT TRANSPORT ERROR (00/22) entries for the 4096-byte (len:1) READ workload on tqpair (0x20afb40) elided; the error counter read back below reports 143 for this run ...]
00:17:50.484 [2024-04-17 16:29:24.392984] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20afb40)
00:17:50.484 [2024-04-17 16:29:24.393029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:8705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:50.484 [2024-04-17 16:29:24.393043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:17:50.484
00:17:50.484 Latency(us)
00:17:50.484 Device Information : runtime(s)     IOPS    MiB/s  Fail/s  TO/s  Average      min       max
00:17:50.484 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:17:50.484 nvme0n1            :       2.00  18291.14  71.45    0.00  0.00  6989.78  3515.11  18945.86
00:17:50.484 ===================================================================================================================
00:17:50.484 Total              :              18291.14  71.45    0.00  0.00  6989.78  3515.11  18945.86
00:17:50.484 0
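Every completion in the run above carries the same status pair. As a reading aid (not part of the test scripts): the "(00/22)" printed by spdk_nvme_print_completion is status code type 0x0 (generic command status) / status code 0x22, which the NVMe base specification names Transient Transport Error and which the error counter read back below tallies. A hypothetical shell helper to decode the pair:
decode_nvme_status() {
    # sct/sc as printed in the completions above, e.g. decode_nvme_status 00 22
    local sct=$1 sc=$2
    case "$sct/$sc" in
        00/00) echo 'GENERIC / SUCCESS' ;;
        00/22) echo 'GENERIC / COMMAND TRANSIENT TRANSPORT ERROR' ;;
        *)     echo "sct=0x$sct sc=0x$sc (see the NVMe base spec status code tables)" ;;
    esac
}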
00:17:50.484 16:29:24 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:17:50.484 16:29:24 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:17:50.484 16:29:24 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:17:50.484 16:29:24 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:17:50.484 | .driver_specific
00:17:50.484 | .nvme_error
00:17:50.484 | .status_code
00:17:50.484 | .command_transient_transport_error'
00:17:50.743 16:29:24 -- host/digest.sh@71 -- # (( 143 > 0 ))
00:17:50.743 16:29:24 -- host/digest.sh@73 -- # killprocess 85818
00:17:50.743 16:29:24 -- common/autotest_common.sh@936 -- # '[' -z 85818 ']'
00:17:50.743 16:29:24 -- common/autotest_common.sh@940 -- # kill -0 85818
00:17:50.743 16:29:24 -- common/autotest_common.sh@941 -- # uname
00:17:50.743 16:29:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:17:50.743 16:29:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85818
killing process with pid 85818
Received shutdown signal, test time was about 2.000000 seconds
00:17:50.743
00:17:50.743 Latency(us)
00:17:50.743 Device Information : runtime(s)     IOPS    MiB/s  Fail/s  TO/s  Average      min       max
00:17:50.743 ===================================================================================================================
00:17:50.743 Total              :                  0.00    0.00    0.00  0.00     0.00     0.00      0.00
00:17:50.743 16:29:24 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:17:50.743 16:29:24 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:17:50.743 16:29:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85818'
00:17:50.743 16:29:24 -- common/autotest_common.sh@955 -- # kill 85818
00:17:50.743 16:29:24 -- common/autotest_common.sh@960 -- # wait 85818
00:17:51.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:17:51.001 16:29:25 -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:17:51.001 16:29:25 -- host/digest.sh@54 -- # local rw bs qd
00:17:51.001 16:29:25 -- host/digest.sh@56 -- # rw=randread
00:17:51.001 16:29:25 -- host/digest.sh@56 -- # bs=131072
00:17:51.001 16:29:25 -- host/digest.sh@56 -- # qd=16
00:17:51.001 16:29:25 -- host/digest.sh@58 -- # bperfpid=85908
00:17:51.001 16:29:25 -- host/digest.sh@60 -- # waitforlisten 85908 /var/tmp/bperf.sock
00:17:51.001 16:29:25 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:17:51.001 16:29:25 -- common/autotest_common.sh@817 -- # '[' -z 85908 ']'
00:17:51.001 16:29:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:17:51.001 16:29:25 -- common/autotest_common.sh@822 -- # local max_retries=100
00:17:51.001 16:29:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:17:51.001 16:29:25 -- common/autotest_common.sh@826 -- # xtrace_disable
00:17:51.001 16:29:25 -- common/autotest_common.sh@10 -- # set +x
00:17:51.259 [2024-04-17 16:29:25.073325] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization...
00:17:51.259 I/O size of 131072 is greater than zero copy threshold (65536).
00:17:51.259 Zero copy mechanism will not be used.
00:17:51.259 [2024-04-17 16:29:25.073456] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85908 ]
00:17:51.259 [2024-04-17 16:29:25.216579] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:51.517 [2024-04-17 16:29:25.334325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
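The pass/fail gate in the trace above is just that iostat read-back: with bdev_nvme_set_options --nvme-error-stat, the host bdev layer keeps per-status-code error counts under driver_specific.nvme_error, and the jq filter extracts the transient-transport-error bucket (143 for this run). A minimal standalone sketch of the same check, assuming a bdevperf instance is serving RPCs on /var/tmp/bperf.sock:
errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
# Fail unless at least one injected digest error surfaced as a transient
# transport error (this run recorded 143).
(( errcount > 0 )) || exit 1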
00:17:52.082 16:29:26 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:17:52.082 16:29:26 -- common/autotest_common.sh@850 -- # return 0
00:17:52.082 16:29:26 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:17:52.082 16:29:26 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:17:52.340 16:29:26 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:17:52.340 16:29:26 -- common/autotest_common.sh@549 -- # xtrace_disable
00:17:52.340 16:29:26 -- common/autotest_common.sh@10 -- # set +x
00:17:52.340 16:29:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:17:52.340 16:29:26 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:17:52.340 16:29:26 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:17:52.905 nvme0n1
00:17:52.905 16:29:26 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:17:52.905 16:29:26 -- common/autotest_common.sh@549 -- # xtrace_disable
00:17:52.905 16:29:26 -- common/autotest_common.sh@10 -- # set +x
00:17:52.905 16:29:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:17:52.905 16:29:26 -- host/digest.sh@69 -- # bperf_py perform_tests
00:17:52.905 16:29:26 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:17:52.905 I/O size of 131072 is greater than zero copy threshold (65536).
00:17:52.905 Zero copy mechanism will not be used.
00:17:52.905 Running I/O for 2 seconds...
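Collected from the xtrace above, the second run's setup boils down to the following sequence; a sketch only, with paths, address and NQN taken verbatim from this job's trace. rpc_cmd's destination socket is not visible in this excerpt, so the error-injection calls are left at rpc.py's default here.
SPDK=/home/vagrant/spdk_repo/spdk
# bdevperf on core mask 0x2 with its own RPC socket: 131072-byte random
# reads, queue depth 16, 2-second run, -z = wait for RPC configuration.
"$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
# Track NVMe errors per status code and retry failed I/O indefinitely.
"$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Keep crc32c error injection disabled while the controller connects...
"$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable
"$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# ...then corrupt crc32c results (-i 32 taken verbatim from the trace) so
# NVMe/TCP data digest checks start failing, and kick off the workload.
"$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests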
00:17:52.905 [2024-04-17 16:29:26.810499] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0)
00:17:52.905 [2024-04-17 16:29:26.810561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:52.905 [2024-04-17 16:29:26.810577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:17:52.905 [2024-04-17 16:29:26.814977] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0)
00:17:52.905 [2024-04-17 16:29:26.815016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:52.905 [2024-04-17 16:29:26.815030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... further data digest error / COMMAND TRANSIENT TRANSPORT ERROR (00/22) entries for the 131072-byte (len:32) READ workload on tqpair (0xa347b0) elided ...]
00:17:53.233 [2024-04-17 16:29:27.010299] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0)
00:17:53.233 [2024-04-17 16:29:27.010336] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.233 [2024-04-17 16:29:27.010349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.233 [2024-04-17 16:29:27.013812] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.233 [2024-04-17 16:29:27.013847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.233 [2024-04-17 16:29:27.013860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.233 [2024-04-17 16:29:27.017388] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.233 [2024-04-17 16:29:27.017427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.233 [2024-04-17 16:29:27.017440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.233 [2024-04-17 16:29:27.021753] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.233 [2024-04-17 16:29:27.021817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.233 [2024-04-17 16:29:27.021831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.233 [2024-04-17 16:29:27.026807] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.233 [2024-04-17 16:29:27.026844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.233 [2024-04-17 16:29:27.026857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.233 [2024-04-17 16:29:27.031374] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.233 [2024-04-17 16:29:27.031411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.233 [2024-04-17 16:29:27.031424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.233 [2024-04-17 16:29:27.035890] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.233 [2024-04-17 16:29:27.035931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.233 [2024-04-17 16:29:27.035945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.233 [2024-04-17 16:29:27.039374] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.233 
[2024-04-17 16:29:27.039406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.233 [2024-04-17 16:29:27.039419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.233 [2024-04-17 16:29:27.043386] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.233 [2024-04-17 16:29:27.043426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.234 [2024-04-17 16:29:27.043439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.234 [2024-04-17 16:29:27.047625] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.234 [2024-04-17 16:29:27.047663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.234 [2024-04-17 16:29:27.047676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.234 [2024-04-17 16:29:27.051256] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.234 [2024-04-17 16:29:27.051300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.234 [2024-04-17 16:29:27.051322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.234 [2024-04-17 16:29:27.055704] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.234 [2024-04-17 16:29:27.055745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.234 [2024-04-17 16:29:27.055759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.234 [2024-04-17 16:29:27.060423] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.234 [2024-04-17 16:29:27.060466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.234 [2024-04-17 16:29:27.060481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.234 [2024-04-17 16:29:27.063749] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.234 [2024-04-17 16:29:27.063800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.234 [2024-04-17 16:29:27.063815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.234 [2024-04-17 16:29:27.067974] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0xa347b0) 00:17:53.234 [2024-04-17 16:29:27.068016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.234 [2024-04-17 16:29:27.068030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.234 [2024-04-17 16:29:27.071715] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.234 [2024-04-17 16:29:27.071756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.234 [2024-04-17 16:29:27.071783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.234 [2024-04-17 16:29:27.075711] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.234 [2024-04-17 16:29:27.075751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.234 [2024-04-17 16:29:27.075764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.234 [2024-04-17 16:29:27.080351] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.234 [2024-04-17 16:29:27.080389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.234 [2024-04-17 16:29:27.080403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.234 [2024-04-17 16:29:27.084480] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.234 [2024-04-17 16:29:27.084523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.234 [2024-04-17 16:29:27.084538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.234 [2024-04-17 16:29:27.087384] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.234 [2024-04-17 16:29:27.087425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.234 [2024-04-17 16:29:27.087439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.234 [2024-04-17 16:29:27.091956] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.234 [2024-04-17 16:29:27.091999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.234 [2024-04-17 16:29:27.092014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.234 [2024-04-17 16:29:27.096339] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.234 [2024-04-17 16:29:27.096382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.234 [2024-04-17 16:29:27.096397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.234 [2024-04-17 16:29:27.100396] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.234 [2024-04-17 16:29:27.100438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.234 [2024-04-17 16:29:27.100453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.234 [2024-04-17 16:29:27.104955] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.234 [2024-04-17 16:29:27.104996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.234 [2024-04-17 16:29:27.105010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.234 [2024-04-17 16:29:27.109122] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.234 [2024-04-17 16:29:27.109163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.234 [2024-04-17 16:29:27.109177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.234 [2024-04-17 16:29:27.113189] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.234 [2024-04-17 16:29:27.113230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.234 [2024-04-17 16:29:27.113244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.234 [2024-04-17 16:29:27.117437] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.234 [2024-04-17 16:29:27.117478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.234 [2024-04-17 16:29:27.117492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.234 [2024-04-17 16:29:27.121653] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.234 [2024-04-17 16:29:27.121695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.234 [2024-04-17 16:29:27.121709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:17:53.234 [2024-04-17 16:29:27.125506] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.234 [2024-04-17 16:29:27.125547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.234 [2024-04-17 16:29:27.125561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.234 [2024-04-17 16:29:27.129965] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.234 [2024-04-17 16:29:27.130005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.234 [2024-04-17 16:29:27.130019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.234 [2024-04-17 16:29:27.133889] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.234 [2024-04-17 16:29:27.133927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.234 [2024-04-17 16:29:27.133941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.234 [2024-04-17 16:29:27.137230] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.234 [2024-04-17 16:29:27.137269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.234 [2024-04-17 16:29:27.137283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.234 [2024-04-17 16:29:27.141980] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.234 [2024-04-17 16:29:27.142020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.234 [2024-04-17 16:29:27.142034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.234 [2024-04-17 16:29:27.145178] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.234 [2024-04-17 16:29:27.145216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.234 [2024-04-17 16:29:27.145230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.235 [2024-04-17 16:29:27.149460] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.235 [2024-04-17 16:29:27.149501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.235 [2024-04-17 16:29:27.149515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.235 [2024-04-17 16:29:27.154812] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.235 [2024-04-17 16:29:27.154852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.235 [2024-04-17 16:29:27.154866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.235 [2024-04-17 16:29:27.159865] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.235 [2024-04-17 16:29:27.159906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.235 [2024-04-17 16:29:27.159921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.235 [2024-04-17 16:29:27.162645] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.235 [2024-04-17 16:29:27.162685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.235 [2024-04-17 16:29:27.162698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.235 [2024-04-17 16:29:27.167033] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.235 [2024-04-17 16:29:27.167073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.235 [2024-04-17 16:29:27.167086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.235 [2024-04-17 16:29:27.171542] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.235 [2024-04-17 16:29:27.171582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.235 [2024-04-17 16:29:27.171596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.235 [2024-04-17 16:29:27.174942] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.235 [2024-04-17 16:29:27.174980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.235 [2024-04-17 16:29:27.174994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.235 [2024-04-17 16:29:27.179342] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.235 [2024-04-17 16:29:27.179381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.235 [2024-04-17 16:29:27.179395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.235 [2024-04-17 16:29:27.183904] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.235 [2024-04-17 16:29:27.183943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.235 [2024-04-17 16:29:27.183957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.235 [2024-04-17 16:29:27.187305] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.235 [2024-04-17 16:29:27.187344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.235 [2024-04-17 16:29:27.187359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.235 [2024-04-17 16:29:27.191817] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.235 [2024-04-17 16:29:27.191864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.235 [2024-04-17 16:29:27.191883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.235 [2024-04-17 16:29:27.196513] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.235 [2024-04-17 16:29:27.196555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.235 [2024-04-17 16:29:27.196570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.235 [2024-04-17 16:29:27.201579] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.235 [2024-04-17 16:29:27.201619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.235 [2024-04-17 16:29:27.201633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.235 [2024-04-17 16:29:27.204503] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.235 [2024-04-17 16:29:27.204542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.235 [2024-04-17 16:29:27.204555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.235 [2024-04-17 16:29:27.208512] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.235 [2024-04-17 16:29:27.208551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.235 [2024-04-17 16:29:27.208564] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.235 [2024-04-17 16:29:27.212305] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.235 [2024-04-17 16:29:27.212344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.235 [2024-04-17 16:29:27.212358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.235 [2024-04-17 16:29:27.216252] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.235 [2024-04-17 16:29:27.216292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.235 [2024-04-17 16:29:27.216305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.235 [2024-04-17 16:29:27.220024] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.235 [2024-04-17 16:29:27.220063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.235 [2024-04-17 16:29:27.220077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.235 [2024-04-17 16:29:27.225174] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.235 [2024-04-17 16:29:27.225214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.235 [2024-04-17 16:29:27.225227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.235 [2024-04-17 16:29:27.229745] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.235 [2024-04-17 16:29:27.229796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.235 [2024-04-17 16:29:27.229810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.235 [2024-04-17 16:29:27.233057] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.235 [2024-04-17 16:29:27.233096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.235 [2024-04-17 16:29:27.233109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.235 [2024-04-17 16:29:27.238375] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.235 [2024-04-17 16:29:27.238417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.235 
[2024-04-17 16:29:27.238431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.235 [2024-04-17 16:29:27.243173] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.235 [2024-04-17 16:29:27.243214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.235 [2024-04-17 16:29:27.243228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.235 [2024-04-17 16:29:27.246757] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.235 [2024-04-17 16:29:27.246807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.235 [2024-04-17 16:29:27.246820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.235 [2024-04-17 16:29:27.250893] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.235 [2024-04-17 16:29:27.250932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.235 [2024-04-17 16:29:27.250946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.235 [2024-04-17 16:29:27.255896] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.236 [2024-04-17 16:29:27.255940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.236 [2024-04-17 16:29:27.255953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.236 [2024-04-17 16:29:27.259611] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.236 [2024-04-17 16:29:27.259649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.236 [2024-04-17 16:29:27.259662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.495 [2024-04-17 16:29:27.263704] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.495 [2024-04-17 16:29:27.263743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.495 [2024-04-17 16:29:27.263757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.495 [2024-04-17 16:29:27.268915] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.495 [2024-04-17 16:29:27.268954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:17:53.495 [2024-04-17 16:29:27.268968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.495 [2024-04-17 16:29:27.272479] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.495 [2024-04-17 16:29:27.272517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.495 [2024-04-17 16:29:27.272530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.495 [2024-04-17 16:29:27.276889] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.495 [2024-04-17 16:29:27.276928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.495 [2024-04-17 16:29:27.276941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.495 [2024-04-17 16:29:27.282137] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.495 [2024-04-17 16:29:27.282176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.495 [2024-04-17 16:29:27.282190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.495 [2024-04-17 16:29:27.287311] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.495 [2024-04-17 16:29:27.287351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.495 [2024-04-17 16:29:27.287365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.495 [2024-04-17 16:29:27.290397] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.495 [2024-04-17 16:29:27.290435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.495 [2024-04-17 16:29:27.290449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.495 [2024-04-17 16:29:27.294750] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.495 [2024-04-17 16:29:27.294799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.495 [2024-04-17 16:29:27.294814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.495 [2024-04-17 16:29:27.299299] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.495 [2024-04-17 16:29:27.299338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.495 [2024-04-17 16:29:27.299351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.495 [2024-04-17 16:29:27.303976] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.495 [2024-04-17 16:29:27.304014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.495 [2024-04-17 16:29:27.304028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.495 [2024-04-17 16:29:27.306858] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.495 [2024-04-17 16:29:27.306895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.495 [2024-04-17 16:29:27.306908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.495 [2024-04-17 16:29:27.311223] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.495 [2024-04-17 16:29:27.311266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.495 [2024-04-17 16:29:27.311279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.495 [2024-04-17 16:29:27.315706] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.495 [2024-04-17 16:29:27.315747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.495 [2024-04-17 16:29:27.315760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.495 [2024-04-17 16:29:27.319417] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.495 [2024-04-17 16:29:27.319454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.495 [2024-04-17 16:29:27.319468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.495 [2024-04-17 16:29:27.323720] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.495 [2024-04-17 16:29:27.323760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.495 [2024-04-17 16:29:27.323787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.495 [2024-04-17 16:29:27.327167] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.495 [2024-04-17 16:29:27.327206] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.495 [2024-04-17 16:29:27.327220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.495 [2024-04-17 16:29:27.331087] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.495 [2024-04-17 16:29:27.331126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.495 [2024-04-17 16:29:27.331139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.495 [2024-04-17 16:29:27.335591] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.495 [2024-04-17 16:29:27.335627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.495 [2024-04-17 16:29:27.335640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.495 [2024-04-17 16:29:27.339361] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.495 [2024-04-17 16:29:27.339398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.495 [2024-04-17 16:29:27.339411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.495 [2024-04-17 16:29:27.343137] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.495 [2024-04-17 16:29:27.343176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.495 [2024-04-17 16:29:27.343189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.495 [2024-04-17 16:29:27.347556] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.495 [2024-04-17 16:29:27.347596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.495 [2024-04-17 16:29:27.347609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.495 [2024-04-17 16:29:27.352552] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.495 [2024-04-17 16:29:27.352591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.495 [2024-04-17 16:29:27.352605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.495 [2024-04-17 16:29:27.355368] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.495 
[2024-04-17 16:29:27.355405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.495 [2024-04-17 16:29:27.355418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.495 [2024-04-17 16:29:27.359817] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.495 [2024-04-17 16:29:27.359850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.495 [2024-04-17 16:29:27.359863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.495 [2024-04-17 16:29:27.364916] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.495 [2024-04-17 16:29:27.364956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.495 [2024-04-17 16:29:27.364969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.495 [2024-04-17 16:29:27.369375] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.495 [2024-04-17 16:29:27.369413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.495 [2024-04-17 16:29:27.369427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.495 [2024-04-17 16:29:27.372228] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.495 [2024-04-17 16:29:27.372265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.496 [2024-04-17 16:29:27.372279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.496 [2024-04-17 16:29:27.377555] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.496 [2024-04-17 16:29:27.377600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.496 [2024-04-17 16:29:27.377614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.496 [2024-04-17 16:29:27.382733] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:53.496 [2024-04-17 16:29:27.382784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.496 [2024-04-17 16:29:27.382799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.496 [2024-04-17 16:29:27.386374] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xa347b0)
00:17:53.496 [2024-04-17 16:29:27.386412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:53.496 [2024-04-17 16:29:27.386425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:17:53.496 [2024-04-17 16:29:27.390942] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0)
00:17:53.496 [2024-04-17 16:29:27.390980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:53.496 [2024-04-17 16:29:27.390993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:17:53.496 [2024-04-17 16:29:27.395951] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0)
00:17:53.496 [2024-04-17 16:29:27.395991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:53.496 [2024-04-17 16:29:27.396005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[~140 further data digest error events on tqpair=(0xa347b0), 2024-04-17 16:29:27.399920 through 16:29:27.972571, elided: each repeats the same record triple, an nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done *ERROR* data digest error, a READ command print (qid:1, len:32, cid and lba varying), and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with sqhd cycling 0001/0021/0041/0061, over elapsed time 00:17:53.496 to 00:17:54.022]
00:17:54.022 [2024-04-17 16:29:27.976902] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0)
00:17:54.022 [2024-04-17 16:29:27.976939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:54.022 [2024-04-17 16:29:27.976953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:17:54.022 [2024-04-17 16:29:27.980268] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.022
[2024-04-17 16:29:27.980306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.022 [2024-04-17 16:29:27.980319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.022 [2024-04-17 16:29:27.985486] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.022 [2024-04-17 16:29:27.985526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.022 [2024-04-17 16:29:27.985539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.022 [2024-04-17 16:29:27.990723] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.022 [2024-04-17 16:29:27.990765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.022 [2024-04-17 16:29:27.990792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.022 [2024-04-17 16:29:27.995722] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.022 [2024-04-17 16:29:27.995760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.022 [2024-04-17 16:29:27.995786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.022 [2024-04-17 16:29:27.998409] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.022 [2024-04-17 16:29:27.998447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.022 [2024-04-17 16:29:27.998460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.022 [2024-04-17 16:29:28.003368] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.022 [2024-04-17 16:29:28.003408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.022 [2024-04-17 16:29:28.003422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.022 [2024-04-17 16:29:28.007696] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.022 [2024-04-17 16:29:28.007735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.022 [2024-04-17 16:29:28.007748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.022 [2024-04-17 16:29:28.011026] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xa347b0) 00:17:54.022 [2024-04-17 16:29:28.011064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.022 [2024-04-17 16:29:28.011077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.022 [2024-04-17 16:29:28.015351] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.022 [2024-04-17 16:29:28.015392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.022 [2024-04-17 16:29:28.015405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.022 [2024-04-17 16:29:28.020179] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.022 [2024-04-17 16:29:28.020221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.022 [2024-04-17 16:29:28.020234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.022 [2024-04-17 16:29:28.023676] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.022 [2024-04-17 16:29:28.023717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.022 [2024-04-17 16:29:28.023730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.022 [2024-04-17 16:29:28.028201] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.022 [2024-04-17 16:29:28.028243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.022 [2024-04-17 16:29:28.028257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.022 [2024-04-17 16:29:28.032737] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.022 [2024-04-17 16:29:28.032786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.022 [2024-04-17 16:29:28.032801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.022 [2024-04-17 16:29:28.036220] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.022 [2024-04-17 16:29:28.036259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.022 [2024-04-17 16:29:28.036273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.022 [2024-04-17 16:29:28.040678] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.022 [2024-04-17 16:29:28.040718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.022 [2024-04-17 16:29:28.040732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.022 [2024-04-17 16:29:28.044977] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.022 [2024-04-17 16:29:28.045017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.023 [2024-04-17 16:29:28.045030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.023 [2024-04-17 16:29:28.048784] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.023 [2024-04-17 16:29:28.048821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.023 [2024-04-17 16:29:28.048834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.023 [2024-04-17 16:29:28.052576] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.023 [2024-04-17 16:29:28.052616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.023 [2024-04-17 16:29:28.052630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.023 [2024-04-17 16:29:28.056247] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.023 [2024-04-17 16:29:28.056288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.023 [2024-04-17 16:29:28.056301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.023 [2024-04-17 16:29:28.060172] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.023 [2024-04-17 16:29:28.060210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.023 [2024-04-17 16:29:28.060223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.282 [2024-04-17 16:29:28.064105] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.282 [2024-04-17 16:29:28.064146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.282 [2024-04-17 16:29:28.064158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:17:54.282 [2024-04-17 16:29:28.068535] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.282 [2024-04-17 16:29:28.068576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.282 [2024-04-17 16:29:28.068589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.282 [2024-04-17 16:29:28.072408] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.282 [2024-04-17 16:29:28.072450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.282 [2024-04-17 16:29:28.072463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.282 [2024-04-17 16:29:28.076537] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.282 [2024-04-17 16:29:28.076576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.282 [2024-04-17 16:29:28.076590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.282 [2024-04-17 16:29:28.080980] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.282 [2024-04-17 16:29:28.081019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.283 [2024-04-17 16:29:28.081032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.283 [2024-04-17 16:29:28.084944] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.283 [2024-04-17 16:29:28.084984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.283 [2024-04-17 16:29:28.084998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.283 [2024-04-17 16:29:28.089015] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.283 [2024-04-17 16:29:28.089054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.283 [2024-04-17 16:29:28.089067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.283 [2024-04-17 16:29:28.092684] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.283 [2024-04-17 16:29:28.092722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.283 [2024-04-17 16:29:28.092735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.283 [2024-04-17 16:29:28.096731] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.283 [2024-04-17 16:29:28.096783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.283 [2024-04-17 16:29:28.096798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.283 [2024-04-17 16:29:28.100932] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.283 [2024-04-17 16:29:28.100971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.283 [2024-04-17 16:29:28.100984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.283 [2024-04-17 16:29:28.104991] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.283 [2024-04-17 16:29:28.105031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.283 [2024-04-17 16:29:28.105044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.283 [2024-04-17 16:29:28.108752] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.283 [2024-04-17 16:29:28.108803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.283 [2024-04-17 16:29:28.108817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.283 [2024-04-17 16:29:28.112875] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.283 [2024-04-17 16:29:28.112913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.283 [2024-04-17 16:29:28.112926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.283 [2024-04-17 16:29:28.117397] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.283 [2024-04-17 16:29:28.117436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.283 [2024-04-17 16:29:28.117449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.283 [2024-04-17 16:29:28.121045] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.283 [2024-04-17 16:29:28.121083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.283 [2024-04-17 16:29:28.121096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.283 [2024-04-17 16:29:28.125162] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.283 [2024-04-17 16:29:28.125202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.283 [2024-04-17 16:29:28.125215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.283 [2024-04-17 16:29:28.129831] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.283 [2024-04-17 16:29:28.129870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.283 [2024-04-17 16:29:28.129883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.283 [2024-04-17 16:29:28.134547] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.283 [2024-04-17 16:29:28.134586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.283 [2024-04-17 16:29:28.134599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.283 [2024-04-17 16:29:28.137814] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.283 [2024-04-17 16:29:28.137851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.283 [2024-04-17 16:29:28.137864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.283 [2024-04-17 16:29:28.141872] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.283 [2024-04-17 16:29:28.141910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.283 [2024-04-17 16:29:28.141924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.283 [2024-04-17 16:29:28.146254] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.283 [2024-04-17 16:29:28.146293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.283 [2024-04-17 16:29:28.146306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.283 [2024-04-17 16:29:28.149992] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.283 [2024-04-17 16:29:28.150030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.283 [2024-04-17 16:29:28.150044] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.283 [2024-04-17 16:29:28.154349] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.283 [2024-04-17 16:29:28.154389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.283 [2024-04-17 16:29:28.154402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.283 [2024-04-17 16:29:28.159173] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.283 [2024-04-17 16:29:28.159212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.283 [2024-04-17 16:29:28.159225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.283 [2024-04-17 16:29:28.162511] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.283 [2024-04-17 16:29:28.162549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.283 [2024-04-17 16:29:28.162562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.283 [2024-04-17 16:29:28.166466] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.283 [2024-04-17 16:29:28.166503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.283 [2024-04-17 16:29:28.166517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.283 [2024-04-17 16:29:28.170598] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.283 [2024-04-17 16:29:28.170638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.283 [2024-04-17 16:29:28.170651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.283 [2024-04-17 16:29:28.174235] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.283 [2024-04-17 16:29:28.174273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.283 [2024-04-17 16:29:28.174286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.283 [2024-04-17 16:29:28.178214] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.283 [2024-04-17 16:29:28.178252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:54.283 [2024-04-17 16:29:28.178265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.283 [2024-04-17 16:29:28.181737] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.283 [2024-04-17 16:29:28.181785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.283 [2024-04-17 16:29:28.181800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.283 [2024-04-17 16:29:28.185547] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.283 [2024-04-17 16:29:28.185585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.283 [2024-04-17 16:29:28.185599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.283 [2024-04-17 16:29:28.189438] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.283 [2024-04-17 16:29:28.189477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.283 [2024-04-17 16:29:28.189490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.284 [2024-04-17 16:29:28.193646] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.284 [2024-04-17 16:29:28.193686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.284 [2024-04-17 16:29:28.193699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.284 [2024-04-17 16:29:28.197041] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.284 [2024-04-17 16:29:28.197080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.284 [2024-04-17 16:29:28.197094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.284 [2024-04-17 16:29:28.200671] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.284 [2024-04-17 16:29:28.200710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.284 [2024-04-17 16:29:28.200723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.284 [2024-04-17 16:29:28.205070] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.284 [2024-04-17 16:29:28.205108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.284 [2024-04-17 16:29:28.205121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.284 [2024-04-17 16:29:28.209617] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.284 [2024-04-17 16:29:28.209655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.284 [2024-04-17 16:29:28.209669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.284 [2024-04-17 16:29:28.214795] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.284 [2024-04-17 16:29:28.214834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.284 [2024-04-17 16:29:28.214847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.284 [2024-04-17 16:29:28.218308] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.284 [2024-04-17 16:29:28.218347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.284 [2024-04-17 16:29:28.218360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.284 [2024-04-17 16:29:28.222229] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.284 [2024-04-17 16:29:28.222267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.284 [2024-04-17 16:29:28.222280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.284 [2024-04-17 16:29:28.227171] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.284 [2024-04-17 16:29:28.227210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.284 [2024-04-17 16:29:28.227224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.284 [2024-04-17 16:29:28.232474] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.284 [2024-04-17 16:29:28.232513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.284 [2024-04-17 16:29:28.232527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.284 [2024-04-17 16:29:28.236032] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.284 [2024-04-17 16:29:28.236070] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.284 [2024-04-17 16:29:28.236084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.284 [2024-04-17 16:29:28.240396] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.284 [2024-04-17 16:29:28.240437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.284 [2024-04-17 16:29:28.240450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.284 [2024-04-17 16:29:28.245473] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.284 [2024-04-17 16:29:28.245513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.284 [2024-04-17 16:29:28.245526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.284 [2024-04-17 16:29:28.250570] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.284 [2024-04-17 16:29:28.250609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.284 [2024-04-17 16:29:28.250623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.284 [2024-04-17 16:29:28.255489] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.284 [2024-04-17 16:29:28.255527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.284 [2024-04-17 16:29:28.255540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.284 [2024-04-17 16:29:28.258179] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.284 [2024-04-17 16:29:28.258216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.284 [2024-04-17 16:29:28.258229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.284 [2024-04-17 16:29:28.262554] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.284 [2024-04-17 16:29:28.262592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.284 [2024-04-17 16:29:28.262605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.284 [2024-04-17 16:29:28.266541] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.284 [2024-04-17 16:29:28.266579] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.284 [2024-04-17 16:29:28.266592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.284 [2024-04-17 16:29:28.270674] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.284 [2024-04-17 16:29:28.270712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.284 [2024-04-17 16:29:28.270725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.284 [2024-04-17 16:29:28.275137] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.284 [2024-04-17 16:29:28.275176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.284 [2024-04-17 16:29:28.275189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.284 [2024-04-17 16:29:28.278421] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.284 [2024-04-17 16:29:28.278459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.284 [2024-04-17 16:29:28.278481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.284 [2024-04-17 16:29:28.282698] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.284 [2024-04-17 16:29:28.282737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.284 [2024-04-17 16:29:28.282750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.284 [2024-04-17 16:29:28.286786] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.284 [2024-04-17 16:29:28.286824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.284 [2024-04-17 16:29:28.286837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.284 [2024-04-17 16:29:28.291038] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.284 [2024-04-17 16:29:28.291077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.284 [2024-04-17 16:29:28.291099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.284 [2024-04-17 16:29:28.295155] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.284 
[2024-04-17 16:29:28.295195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.284 [2024-04-17 16:29:28.295208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.284 [2024-04-17 16:29:28.298858] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.284 [2024-04-17 16:29:28.298899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.284 [2024-04-17 16:29:28.298913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.284 [2024-04-17 16:29:28.302936] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.284 [2024-04-17 16:29:28.302974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.284 [2024-04-17 16:29:28.302987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.284 [2024-04-17 16:29:28.307553] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.285 [2024-04-17 16:29:28.307591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.285 [2024-04-17 16:29:28.307604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.285 [2024-04-17 16:29:28.311988] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.285 [2024-04-17 16:29:28.312056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.285 [2024-04-17 16:29:28.312069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.285 [2024-04-17 16:29:28.314802] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.285 [2024-04-17 16:29:28.314838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.285 [2024-04-17 16:29:28.314851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.285 [2024-04-17 16:29:28.318618] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.285 [2024-04-17 16:29:28.318658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.285 [2024-04-17 16:29:28.318670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.285 [2024-04-17 16:29:28.322315] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xa347b0) 00:17:54.285 [2024-04-17 16:29:28.322362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.285 [2024-04-17 16:29:28.322375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.545 [2024-04-17 16:29:28.326696] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.545 [2024-04-17 16:29:28.326736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.545 [2024-04-17 16:29:28.326749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.545 [2024-04-17 16:29:28.330858] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.545 [2024-04-17 16:29:28.330896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.545 [2024-04-17 16:29:28.330909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.545 [2024-04-17 16:29:28.334463] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.545 [2024-04-17 16:29:28.334502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.545 [2024-04-17 16:29:28.334515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.545 [2024-04-17 16:29:28.338537] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.545 [2024-04-17 16:29:28.338575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.545 [2024-04-17 16:29:28.338589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.545 [2024-04-17 16:29:28.342670] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.545 [2024-04-17 16:29:28.342708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.545 [2024-04-17 16:29:28.342721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.545 [2024-04-17 16:29:28.347841] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0) 00:17:54.545 [2024-04-17 16:29:28.347880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.545 [2024-04-17 16:29:28.347893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.545 [2024-04-17 16:29:28.352866] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0)
00:17:54.545 [2024-04-17 16:29:28.352904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:54.545 [2024-04-17 16:29:28.352918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:17:54.545 [2024-04-17 16:29:28.355626] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0)
00:17:54.545 [2024-04-17 16:29:28.355663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:54.545 [2024-04-17 16:29:28.355676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... roughly 105 further READ failures in the same three-line pattern (data digest error on tqpair=(0xa347b0), the failed READ command, its TRANSIENT TRANSPORT ERROR (00/22) completion), timestamps 16:29:28.360845 through 16:29:28.792097, elided ...]
00:17:54.809 [2024-04-17 16:29:28.795496] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa347b0)
00:17:54.809 [2024-04-17 16:29:28.795565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:54.809 [2024-04-17 16:29:28.795595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
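Each failure above is one three-line group: the transport-level digest error from nvme_tcp.c, the READ it maps back to, and the resulting completion. The "(00/22)" status is status code type 0x0 / status code 0x22, i.e. Command Transient Transport Error, which is exactly what this test expects once crc32c corruption is injected. A quick way to tally such records from a saved copy of this console output (build.log is a hypothetical file name, not an artifact this job produces):

  # count transport-level digest errors (case-insensitive: the randread leg
  # logs "data digest error", the randwrite leg "Data digest error")
  grep -ci 'data digest error' build.log
  # count the completions carrying the 00/22 status
  grep -c 'TRANSIENT TRANSPORT ERROR (00/22)' build.log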
00:17:54.809
00:17:54.810 Latency(us)
00:17:54.810 Device Information          : runtime(s)     IOPS    MiB/s   Fail/s    TO/s   Average      min       max
00:17:54.810 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:17:54.810 nvme0n1                     :       2.00  7457.88   932.24     0.00    0.00   2141.88   599.51  11915.64
00:17:54.810 ===================================================================================================================
00:17:54.810 Total                       :             7457.88   932.24     0.00    0.00   2141.88   599.51  11915.64
00:17:54.810 0
00:17:54.810 16:29:28 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:17:54.810 16:29:28 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:17:54.810 16:29:28 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:17:54.810 16:29:28 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:17:54.810 | .driver_specific
00:17:54.810 | .nvme_error
00:17:54.810 | .status_code
00:17:54.810 | .command_transient_transport_error'
00:17:55.068 16:29:29 -- host/digest.sh@71 -- # (( 481 > 0 ))
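The stats line is self-consistent: 7457.88 IOPS at an IO size of 131072 bytes is 7457.88 x 128 KiB ≈ 932.24 MiB/s, matching the MiB/s column. The get_transient_errcount fragment traced above collapses to a single pipeline; this is the same RPC and jq path shown in the trace, with the 481 comparison inlined:

  # read the per-status NVMe error counter out of bdevperf's iostat JSON
  err_count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( err_count > 0 ))   # here it came back as 481, so the check passes

The per-status counters are only populated because bdevperf's NVMe layer is started with bdev_nvme_set_options --nvme-error-stat, as the setup trace for the next leg shows below.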
00:17:55.068 16:29:29 -- host/digest.sh@73 -- # killprocess 85908
00:17:55.068 16:29:29 -- common/autotest_common.sh@936 -- # '[' -z 85908 ']'
00:17:55.068 16:29:29 -- common/autotest_common.sh@940 -- # kill -0 85908
00:17:55.068 16:29:29 -- common/autotest_common.sh@941 -- # uname
00:17:55.068 16:29:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:17:55.068 16:29:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85908
00:17:55.326 16:29:29 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:17:55.326 killing process with pid 85908
00:17:55.326 16:29:29 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:17:55.326 16:29:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85908'
00:17:55.326 Received shutdown signal, test time was about 2.000000 seconds
00:17:55.326
00:17:55.326 Latency(us)
00:17:55.326 Device Information          : runtime(s)     IOPS    MiB/s   Fail/s    TO/s   Average      min       max
00:17:55.326 ===================================================================================================================
00:17:55.326 Total                       :                0.00     0.00     0.00    0.00      0.00     0.00      0.00
00:17:55.326 16:29:29 -- common/autotest_common.sh@955 -- # kill 85908
00:17:55.326 16:29:29 -- common/autotest_common.sh@960 -- # wait 85908
00:17:55.584 16:29:29 -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:17:55.584 16:29:29 -- host/digest.sh@54 -- # local rw bs qd
00:17:55.584 16:29:29 -- host/digest.sh@56 -- # rw=randwrite
00:17:55.584 16:29:29 -- host/digest.sh@56 -- # bs=4096
00:17:55.584 16:29:29 -- host/digest.sh@56 -- # qd=128
00:17:55.584 16:29:29 -- host/digest.sh@58 -- # bperfpid=85994
00:17:55.584 16:29:29 -- host/digest.sh@60 -- # waitforlisten 85994 /var/tmp/bperf.sock
00:17:55.584 16:29:29 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:17:55.584 16:29:29 -- common/autotest_common.sh@817 -- # '[' -z 85994 ']'
00:17:55.584 16:29:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:17:55.584 16:29:29 -- common/autotest_common.sh@822 -- # local max_retries=100
00:17:55.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:17:55.584 16:29:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:17:55.584 16:29:29 -- common/autotest_common.sh@826 -- # xtrace_disable
00:17:55.584 16:29:29 -- common/autotest_common.sh@10 -- # set +x
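The traced launch boils down to starting bdevperf in wait-for-RPC mode (-z) on a private socket and polling until it answers. A minimal sketch of that pattern; the polling loop merely stands in for waitforlisten from autotest_common.sh and is illustrative, not the script's actual implementation:

  # start bdevperf idle on the bperf socket; -z defers the workload until
  # perform_tests arrives over RPC
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
  bperfpid=$!
  # poll until the app is listening on the UNIX domain socket
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done

Deferring the run this way is what lets the script attach the controller and arm the error injector before any I/O is issued.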
00:17:55.584 [2024-04-17 16:29:29.440503] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization...
00:17:55.584 [2024-04-17 16:29:29.440609] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85994 ]
00:17:55.584 [2024-04-17 16:29:29.580368] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:55.842 [2024-04-17 16:29:29.688862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:17:56.780 16:29:30 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:17:56.780 16:29:30 -- common/autotest_common.sh@850 -- # return 0
00:17:56.780 16:29:30 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:17:56.780 16:29:30 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:17:56.780 16:29:30 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:17:56.780 16:29:30 -- common/autotest_common.sh@549 -- # xtrace_disable
00:17:56.780 16:29:30 -- common/autotest_common.sh@10 -- # set +x
00:17:56.780 16:29:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:17:56.780 16:29:30 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:17:56.780 16:29:30 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:17:57.038 nvme0n1
00:17:57.038 16:29:31 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:17:57.038 16:29:31 -- common/autotest_common.sh@549 -- # xtrace_disable
00:17:57.038 16:29:31 -- common/autotest_common.sh@10 -- # set +x
00:17:57.038 16:29:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:17:57.038 16:29:31 -- host/digest.sh@69 -- # bperf_py perform_tests
00:17:57.038 16:29:31 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
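Condensed, the digest-error setup above is five RPCs plus the test trigger. In this sketch, bperf_rpc's target (/var/tmp/bperf.sock) is taken straight from the trace, while rpc_cmd is assumed to address the default application socket /var/tmp/spdk.sock; that path is an assumption, the trace hides it behind the rpc_cmd wrapper:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # keep per-status NVMe error counters and retry failed I/O at the bdev layer
  $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # start from a clean injector
  $rpc -s /var/tmp/spdk.sock accel_error_inject_error -o crc32c -t disable
  # --ddgst turns on the NVMe/TCP data digest for this controller
  $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # arm crc32c corruption (flags exactly as traced; semantics of -i per SPDK's accel_error module)
  $rpc -s /var/tmp/spdk.sock accel_error_inject_error -o crc32c -t corrupt -i 256
  # kick off the deferred bdevperf workload
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Every corrupted crc32c then surfaces on the initiator as the data_crc32_calc_done errors and (00/22) completions that follow.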
00:17:57.297 Running I/O for 2 seconds...
00:17:57.297 [2024-04-17 16:29:31.189213] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190f6458
00:17:57.297 [2024-04-17 16:29:31.189977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:57.297 [2024-04-17 16:29:31.190015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:17:57.297 [2024-04-17 16:29:31.203277] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190f9f68
00:17:57.297 [2024-04-17 16:29:31.204185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:57.297 [2024-04-17 16:29:31.204236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
[... 18 further WRITE "Data digest error" records in the same three-line pattern, pdu values 0x2000190e9168 through 0x2000190e3498, timestamps 16:29:31.214648 through 16:29:31.416130, elided ...]
00:17:57.556 [2024-04-17 16:29:31.428422] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190f0ff8
00:17:57.556 [2024-04-17 16:29:31.429790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:13150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:57.556 [2024-04-17 16:29:31.429823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22)
qid:1 cid:28 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:57.556 [2024-04-17 16:29:31.439500] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190fe720 00:17:57.556 [2024-04-17 16:29:31.440754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.556 [2024-04-17 16:29:31.440793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:57.556 [2024-04-17 16:29:31.451003] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190efae0 00:17:57.556 [2024-04-17 16:29:31.451700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.556 [2024-04-17 16:29:31.451734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:57.556 [2024-04-17 16:29:31.462309] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190f8618 00:17:57.556 [2024-04-17 16:29:31.462935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.556 [2024-04-17 16:29:31.462976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:57.556 [2024-04-17 16:29:31.475946] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190f3a28 00:17:57.556 [2024-04-17 16:29:31.477362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.557 [2024-04-17 16:29:31.477400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:57.557 [2024-04-17 16:29:31.486845] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190e23b8 00:17:57.557 [2024-04-17 16:29:31.488075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.557 [2024-04-17 16:29:31.488111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:57.557 [2024-04-17 16:29:31.498085] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190e2c28 00:17:57.557 [2024-04-17 16:29:31.499165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:23548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.557 [2024-04-17 16:29:31.499203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:57.557 [2024-04-17 16:29:31.509280] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190eaab8 00:17:57.557 [2024-04-17 16:29:31.510184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:3601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.557 [2024-04-17 16:29:31.510219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:57.557 [2024-04-17 16:29:31.520551] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190ed0b0 00:17:57.557 [2024-04-17 16:29:31.521330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:11489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.557 [2024-04-17 16:29:31.521363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:57.557 [2024-04-17 16:29:31.535047] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190eff18 00:17:57.557 [2024-04-17 16:29:31.536436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:3687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.557 [2024-04-17 16:29:31.536470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:57.557 [2024-04-17 16:29:31.546289] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190f46d0 00:17:57.557 [2024-04-17 16:29:31.547519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:13529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.557 [2024-04-17 16:29:31.547552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:57.557 [2024-04-17 16:29:31.558107] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190de038 00:17:57.557 [2024-04-17 16:29:31.559497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.557 [2024-04-17 16:29:31.559544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:57.557 [2024-04-17 16:29:31.569965] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190f0ff8 00:17:57.557 [2024-04-17 16:29:31.570892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.557 [2024-04-17 16:29:31.570940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:57.557 [2024-04-17 16:29:31.581252] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190f2948 00:17:57.557 [2024-04-17 16:29:31.582056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.557 [2024-04-17 16:29:31.582101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:57.557 [2024-04-17 16:29:31.592574] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190fcdd0 00:17:57.557 [2024-04-17 16:29:31.593582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:21822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.557 [2024-04-17 16:29:31.593626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:57.816 [2024-04-17 16:29:31.606700] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190e3d08 00:17:57.816 [2024-04-17 16:29:31.608544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.816 [2024-04-17 16:29:31.608597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:57.816 [2024-04-17 16:29:31.619575] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190ef6a8 00:17:57.816 [2024-04-17 16:29:31.621508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:10017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.816 [2024-04-17 16:29:31.621559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:57.816 [2024-04-17 16:29:31.628219] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190f2d80 00:17:57.816 [2024-04-17 16:29:31.629122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.816 [2024-04-17 16:29:31.629157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:57.816 [2024-04-17 16:29:31.642651] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190f6020 00:17:57.816 [2024-04-17 16:29:31.644357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:18280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.816 [2024-04-17 16:29:31.644406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:57.816 [2024-04-17 16:29:31.654861] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190ed0b0 00:17:57.816 [2024-04-17 16:29:31.656496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:10741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.816 [2024-04-17 16:29:31.656544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:57.816 [2024-04-17 16:29:31.666254] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190f7538 00:17:57.816 [2024-04-17 16:29:31.667672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:14782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.816 [2024-04-17 16:29:31.667719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:57.816 [2024-04-17 16:29:31.677934] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190e4578 00:17:57.816 [2024-04-17 16:29:31.679396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.816 [2024-04-17 16:29:31.679443] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:57.816 [2024-04-17 16:29:31.689140] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190f1430 00:17:57.816 [2024-04-17 16:29:31.690170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.817 [2024-04-17 16:29:31.690203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:57.817 [2024-04-17 16:29:31.700638] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190e1f80 00:17:57.817 [2024-04-17 16:29:31.701438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:10178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.817 [2024-04-17 16:29:31.701471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:57.817 [2024-04-17 16:29:31.712824] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190e84c0 00:17:57.817 [2024-04-17 16:29:31.713995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:21004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.817 [2024-04-17 16:29:31.714029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:57.817 [2024-04-17 16:29:31.725019] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190df118 00:17:57.817 [2024-04-17 16:29:31.726126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:6602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.817 [2024-04-17 16:29:31.726161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:57.817 [2024-04-17 16:29:31.736814] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190f0ff8 00:17:57.817 [2024-04-17 16:29:31.738007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:24372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.817 [2024-04-17 16:29:31.738049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:57.817 [2024-04-17 16:29:31.750516] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190e4de8 00:17:57.817 [2024-04-17 16:29:31.752030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.817 [2024-04-17 16:29:31.752069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:57.817 [2024-04-17 16:29:31.762012] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190de8a8 00:17:57.817 [2024-04-17 16:29:31.763342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.817 [2024-04-17 16:29:31.763378] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:57.817 [2024-04-17 16:29:31.773876] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190eb328 00:17:57.817 [2024-04-17 16:29:31.774667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:1321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.817 [2024-04-17 16:29:31.774704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:57.817 [2024-04-17 16:29:31.785697] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190f35f0 00:17:57.817 [2024-04-17 16:29:31.786689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.817 [2024-04-17 16:29:31.786723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:57.817 [2024-04-17 16:29:31.797160] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190fb8b8 00:17:57.817 [2024-04-17 16:29:31.798359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:16174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.817 [2024-04-17 16:29:31.798395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:57.817 [2024-04-17 16:29:31.809532] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190e7c50 00:17:57.817 [2024-04-17 16:29:31.810669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:17549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.817 [2024-04-17 16:29:31.810729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:57.817 [2024-04-17 16:29:31.821210] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190e01f8 00:17:57.817 [2024-04-17 16:29:31.822193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:12231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.817 [2024-04-17 16:29:31.822228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:57.817 [2024-04-17 16:29:31.832581] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190e95a0 00:17:57.817 [2024-04-17 16:29:31.833452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:8937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.817 [2024-04-17 16:29:31.833486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:57.817 [2024-04-17 16:29:31.846301] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190de8a8 00:17:57.817 [2024-04-17 16:29:31.847631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.817 [2024-04-17 
16:29:31.847667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:57.817 [2024-04-17 16:29:31.857913] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190df550 00:17:57.817 [2024-04-17 16:29:31.859229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:57.817 [2024-04-17 16:29:31.859263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:58.075 [2024-04-17 16:29:31.869871] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190fac10 00:17:58.075 [2024-04-17 16:29:31.871182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.075 [2024-04-17 16:29:31.871217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:58.075 [2024-04-17 16:29:31.881150] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190e99d8 00:17:58.075 [2024-04-17 16:29:31.882333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:15322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.075 [2024-04-17 16:29:31.882368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:58.075 [2024-04-17 16:29:31.892763] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190ebb98 00:17:58.075 [2024-04-17 16:29:31.893910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:21516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.075 [2024-04-17 16:29:31.893944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:58.075 [2024-04-17 16:29:31.906129] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190fe2e8 00:17:58.076 [2024-04-17 16:29:31.907612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:9359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.076 [2024-04-17 16:29:31.907648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:58.076 [2024-04-17 16:29:31.917338] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190fd208 00:17:58.076 [2024-04-17 16:29:31.918702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:6834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.076 [2024-04-17 16:29:31.918736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:58.076 [2024-04-17 16:29:31.931208] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190ecc78 00:17:58.076 [2024-04-17 16:29:31.933208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:11315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:58.076 [2024-04-17 16:29:31.933244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:58.076 [2024-04-17 16:29:31.939654] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190fb480 00:17:58.076 [2024-04-17 16:29:31.940507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.076 [2024-04-17 16:29:31.940541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:58.076 [2024-04-17 16:29:31.954564] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190eb328 00:17:58.076 [2024-04-17 16:29:31.956404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.076 [2024-04-17 16:29:31.956437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:58.076 [2024-04-17 16:29:31.965834] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190f2510 00:17:58.076 [2024-04-17 16:29:31.967536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:9427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.076 [2024-04-17 16:29:31.967572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:58.076 [2024-04-17 16:29:31.975371] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190f5be8 00:17:58.076 [2024-04-17 16:29:31.976373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.076 [2024-04-17 16:29:31.976407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:58.076 [2024-04-17 16:29:31.988716] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190eff18 00:17:58.076 [2024-04-17 16:29:31.990229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:15337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.076 [2024-04-17 16:29:31.990265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:58.076 [2024-04-17 16:29:31.999683] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190fc128 00:17:58.076 [2024-04-17 16:29:32.000763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.076 [2024-04-17 16:29:32.000809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:58.076 [2024-04-17 16:29:32.011368] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190e6b70 00:17:58.076 [2024-04-17 16:29:32.012442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:21975 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:17:58.076 [2024-04-17 16:29:32.012478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:58.076 [2024-04-17 16:29:32.023031] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190fef90 00:17:58.076 [2024-04-17 16:29:32.024117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:17342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.076 [2024-04-17 16:29:32.024153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:58.076 [2024-04-17 16:29:32.034515] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190e23b8 00:17:58.076 [2024-04-17 16:29:32.035691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.076 [2024-04-17 16:29:32.035724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:58.076 [2024-04-17 16:29:32.048571] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190df118 00:17:58.076 [2024-04-17 16:29:32.050424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:3605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.076 [2024-04-17 16:29:32.050458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:58.076 [2024-04-17 16:29:32.056929] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190e4578 00:17:58.076 [2024-04-17 16:29:32.057794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:7040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.076 [2024-04-17 16:29:32.057827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:58.076 [2024-04-17 16:29:32.068875] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190ed0b0 00:17:58.076 [2024-04-17 16:29:32.069737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:3035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.076 [2024-04-17 16:29:32.069785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:58.076 [2024-04-17 16:29:32.082284] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190e0a68 00:17:58.076 [2024-04-17 16:29:32.083660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:1873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.076 [2024-04-17 16:29:32.083694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:58.076 [2024-04-17 16:29:32.093227] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190ec408 00:17:58.076 [2024-04-17 16:29:32.094188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13258 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.076 [2024-04-17 16:29:32.094223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:58.076 [2024-04-17 16:29:32.104854] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190e88f8 00:17:58.076 [2024-04-17 16:29:32.105975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:6017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.076 [2024-04-17 16:29:32.106010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:58.076 [2024-04-17 16:29:32.116999] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190f6890 00:17:58.076 [2024-04-17 16:29:32.118107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:6453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.076 [2024-04-17 16:29:32.118144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:58.334 [2024-04-17 16:29:32.129429] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190f81e0 00:17:58.334 [2024-04-17 16:29:32.130623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.334 [2024-04-17 16:29:32.130661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:58.334 [2024-04-17 16:29:32.141179] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190e84c0 00:17:58.334 [2024-04-17 16:29:32.142467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.334 [2024-04-17 16:29:32.142506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:58.334 [2024-04-17 16:29:32.154665] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190feb58 00:17:58.334 [2024-04-17 16:29:32.156294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:11641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.334 [2024-04-17 16:29:32.156328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:58.334 [2024-04-17 16:29:32.165742] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190f1430 00:17:58.334 [2024-04-17 16:29:32.167229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:10201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.334 [2024-04-17 16:29:32.167264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:58.334 [2024-04-17 16:29:32.177274] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190f9b30 00:17:58.334 [2024-04-17 16:29:32.178213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 
lba:22991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.334 [2024-04-17 16:29:32.178247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:58.334 [2024-04-17 16:29:32.189035] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190df118 00:17:58.334 [2024-04-17 16:29:32.190310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.334 [2024-04-17 16:29:32.190345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:58.335 [2024-04-17 16:29:32.199934] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190e88f8 00:17:58.335 [2024-04-17 16:29:32.201048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:15612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.335 [2024-04-17 16:29:32.201084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:58.335 [2024-04-17 16:29:32.212216] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190dece0 00:17:58.335 [2024-04-17 16:29:32.212994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.335 [2024-04-17 16:29:32.213028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:58.335 [2024-04-17 16:29:32.223450] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190f4b08 00:17:58.335 [2024-04-17 16:29:32.224096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.335 [2024-04-17 16:29:32.224131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:58.335 [2024-04-17 16:29:32.235878] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190ee190 00:17:58.335 [2024-04-17 16:29:32.236620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.335 [2024-04-17 16:29:32.236656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:58.335 [2024-04-17 16:29:32.247781] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190e23b8 00:17:58.335 [2024-04-17 16:29:32.248897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.335 [2024-04-17 16:29:32.248932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:58.335 [2024-04-17 16:29:32.261522] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190e7818 00:17:58.335 [2024-04-17 16:29:32.263378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:23 nsid:1 lba:13415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.335 [2024-04-17 16:29:32.263415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:58.335 [2024-04-17 16:29:32.270150] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190e23b8 00:17:58.335 [2024-04-17 16:29:32.270940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:15856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.335 [2024-04-17 16:29:32.270975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:58.335 [2024-04-17 16:29:32.282471] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190ff3c8 00:17:58.335 [2024-04-17 16:29:32.283296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:20252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.335 [2024-04-17 16:29:32.283332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:58.335 [2024-04-17 16:29:32.296247] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190efae0 00:17:58.335 [2024-04-17 16:29:32.297512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:11925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.335 [2024-04-17 16:29:32.297550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:58.335 [2024-04-17 16:29:32.310462] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190ec840 00:17:58.335 [2024-04-17 16:29:32.312439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.335 [2024-04-17 16:29:32.312472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:58.335 [2024-04-17 16:29:32.319007] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190f1ca0 00:17:58.335 [2024-04-17 16:29:32.319969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:3715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.335 [2024-04-17 16:29:32.320003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:58.335 [2024-04-17 16:29:32.331143] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190e1710 00:17:58.335 [2024-04-17 16:29:32.332109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:3957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.335 [2024-04-17 16:29:32.332143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:58.335 [2024-04-17 16:29:32.342386] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190efae0 00:17:58.335 [2024-04-17 16:29:32.343216] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.335 [2024-04-17 16:29:32.343250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:58.335 [2024-04-17 16:29:32.354222] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190e0ea0 00:17:58.335 [2024-04-17 16:29:32.355038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:11952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.335 [2024-04-17 16:29:32.355073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:58.335 [2024-04-17 16:29:32.366463] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190de470 00:17:58.335 [2024-04-17 16:29:32.367281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:1342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.335 [2024-04-17 16:29:32.367314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:58.594 [2024-04-17 16:29:32.380583] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190e49b0 00:17:58.594 [2024-04-17 16:29:32.381568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.594 [2024-04-17 16:29:32.381618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:58.594 [2024-04-17 16:29:32.392530] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190f57b0 00:17:58.594 [2024-04-17 16:29:32.393899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:6040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.594 [2024-04-17 16:29:32.393934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:58.594 [2024-04-17 16:29:32.406064] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190eaab8 00:17:58.594 [2024-04-17 16:29:32.407938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:9954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.594 [2024-04-17 16:29:32.407973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:58.594 [2024-04-17 16:29:32.416161] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190e5a90 00:17:58.594 [2024-04-17 16:29:32.417312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:17325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.594 [2024-04-17 16:29:32.417345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:58.594 [2024-04-17 16:29:32.428412] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190f5be8 00:17:58.594 [2024-04-17 16:29:32.429205] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:9580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.594 [2024-04-17 16:29:32.429240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:58.594 [2024-04-17 16:29:32.440315] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190e6300 00:17:58.594 [2024-04-17 16:29:32.441482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.594 [2024-04-17 16:29:32.441515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:58.594 [2024-04-17 16:29:32.451711] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190dece0 00:17:58.594 [2024-04-17 16:29:32.452693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:16095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.594 [2024-04-17 16:29:32.452758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:58.594 [2024-04-17 16:29:32.463499] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190f8618 00:17:58.594 [2024-04-17 16:29:32.464378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.594 [2024-04-17 16:29:32.464411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:58.594 [2024-04-17 16:29:32.477918] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190f92c0 00:17:58.594 [2024-04-17 16:29:32.479757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:21991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.594 [2024-04-17 16:29:32.479836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:58.594 [2024-04-17 16:29:32.485902] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190fc560 00:17:58.594 [2024-04-17 16:29:32.486722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.594 [2024-04-17 16:29:32.486754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:58.594 [2024-04-17 16:29:32.499519] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190ee5c8 00:17:58.594 [2024-04-17 16:29:32.501063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:58.594 [2024-04-17 16:29:32.501097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:58.594 [2024-04-17 16:29:32.511181] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190f92c0 00:17:58.594 [2024-04-17 16:29:32.512673] 
00:17:58.594 nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:20600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:58.594 [2024-04-17 16:29:32.512706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:17:58.594 [2024-04-17 16:29:32.524638] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65ad0) with pdu=0x2000190f0bc0
00:17:58.594 [... the same three-line pattern repeats roughly every 10 ms from 16:29:32.512 through 16:29:33.176: a data_crc32_calc_done *ERROR* on tqpair=(0xd65ad0) with a varying pdu, the offending single-block WRITE print, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion on qid:1 with varying cid/lba; the run's error counter below reads 168 ...]
00:17:59.174
00:17:59.174 Latency(us)
00:17:59.174 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:59.174 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:59.174 nvme0n1 : 2.01 21431.42 83.72 0.00 0.00 5966.67 2353.34 16443.58
00:17:59.174 ===================================================================================================================
00:17:59.174 Total : 21431.42 83.72 0.00 0.00 5966.67 2353.34 16443.58
00:17:59.174 0
00:17:59.174 16:29:33 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:17:59.174 16:29:33 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:17:59.174 16:29:33 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:17:59.175 16:29:33 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:17:59.175 | .driver_specific
00:17:59.175 | .nvme_error
00:17:59.175 | .status_code
00:17:59.175 | .command_transient_transport_error'
00:17:59.433 16:29:33 -- host/digest.sh@71 -- # (( 168 > 0 ))
00:17:59.433 16:29:33 -- host/digest.sh@73 -- # killprocess 85994
00:17:59.433 16:29:33 -- common/autotest_common.sh@936 -- # '[' -z 85994 ']'
00:17:59.433 16:29:33 -- common/autotest_common.sh@940 -- # kill -0 85994
00:17:59.433 16:29:33 -- common/autotest_common.sh@941 -- # uname
00:17:59.433 16:29:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:17:59.433 16:29:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85994
00:17:59.433 killing process with pid 85994
00:17:59.433 Received shutdown signal, test time was about 2.000000 seconds
00:17:59.433
00:17:59.433 Latency(us)
00:17:59.433 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:59.433 ===================================================================================================================
00:17:59.433 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:17:59.433 16:29:33 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:17:59.433 16:29:33 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:17:59.433 16:29:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85994'
00:17:59.433 16:29:33 -- common/autotest_common.sh@955 -- # kill 85994
00:17:59.433 16:29:33 -- common/autotest_common.sh@960 -- # wait 85994
00:17:59.691 16:29:33 -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:17:59.691 16:29:33 -- host/digest.sh@54 -- # local rw bs qd
00:17:59.691 16:29:33 -- host/digest.sh@56 -- # rw=randwrite
00:17:59.691 16:29:33 -- host/digest.sh@56 -- # bs=131072
00:17:59.691 16:29:33 -- host/digest.sh@56 -- # qd=16
00:17:59.691 16:29:33 -- host/digest.sh@58 -- # bperfpid=86089
00:17:59.691 16:29:33 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:17:59.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
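The xtrace above is the pass/fail check for the first run: host/digest.sh queries bdevperf over its RPC socket, extracts the transient-transport-error counter that the bdev_nvme error statistics maintain, and asserts it is non-zero (here it evaluates as "(( 168 > 0 ))"). Below is a minimal bash sketch of that check, reconstructed from the trace rather than copied from host/digest.sh; the RPC name, socket path, and jq filter are verbatim from the log, while the surrounding scaffolding is assumed.

#!/usr/bin/env bash
# Sketch of the get_transient_errcount check traced above (reconstructed
# from the xtrace; scaffolding is an assumption, not the actual script).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path taken from the log
sock=/var/tmp/bperf.sock                          # bdevperf's RPC socket

get_transient_errcount() {
    local bdev=$1
    # bdev_get_iostat carries per-bdev NVMe error statistics when the
    # bdev_nvme module was configured with --nvme-error-stat; pull out
    # the count of TRANSIENT TRANSPORT ERROR (00/22) completions.
    "$rpc" -s "$sock" bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error'
}

errcount=$(get_transient_errcount nvme0n1)
# The run passes only if at least one injected digest error was counted.
(( errcount > 0 ))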
00:17:59.691 16:29:33 -- host/digest.sh@60 -- # waitforlisten 86089 /var/tmp/bperf.sock
00:17:59.691 16:29:33 -- common/autotest_common.sh@817 -- # '[' -z 86089 ']'
00:17:59.691 16:29:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:17:59.691 16:29:33 -- common/autotest_common.sh@822 -- # local max_retries=100
00:17:59.691 16:29:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:17:59.691 16:29:33 -- common/autotest_common.sh@826 -- # xtrace_disable
00:17:59.691 16:29:33 -- common/autotest_common.sh@10 -- # set +x
00:17:59.950 I/O size of 131072 is greater than zero copy threshold (65536).
00:17:59.950 Zero copy mechanism will not be used.
00:17:59.950 [2024-04-17 16:29:33.781014] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization...
00:17:59.950 [2024-04-17 16:29:33.781108] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86089 ]
00:18:00.209 [2024-04-17 16:29:33.922401] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:00.209 [2024-04-17 16:29:34.052236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:18:00.776 16:29:34 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:18:00.776 16:29:34 -- common/autotest_common.sh@850 -- # return 0
00:18:00.776 16:29:34 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:18:00.776 16:29:34 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:18:01.035 16:29:35 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:18:01.035 16:29:35 -- common/autotest_common.sh@549 -- # xtrace_disable
00:18:01.035 16:29:35 -- common/autotest_common.sh@10 -- # set +x
00:18:01.035 16:29:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:18:01.035 16:29:35 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:18:01.035 16:29:35 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:18:01.605 nvme0n1
00:18:01.605 16:29:35 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:18:01.605 16:29:35 -- common/autotest_common.sh@549 -- # xtrace_disable
00:18:01.605 16:29:35 -- common/autotest_common.sh@10 -- # set +x
00:18:01.605 16:29:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:18:01.605 16:29:35 -- host/digest.sh@69 -- # bperf_py perform_tests
00:18:01.605 16:29:35 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:18:01.605 I/O size of 131072 is greater than zero copy threshold (65536).
00:18:01.605 Zero copy mechanism will not be used.
00:18:01.605 Running I/O for 2 seconds...
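The trace above is the setup for the second run (run_bperf_err randwrite 131072 16): error counting and unlimited retries are enabled in the bdev_nvme module, CRC32C error injection is first cleared, the controller is attached with data digest enabled, injection is re-armed, and the bdevperf job is started. The sketch below condenses that sequence; every RPC name and flag is verbatim from the trace, while the split between sockets is an inference from the helper names (rpc_cmd appears to address the nvmf target's default RPC socket, bperf_rpc pins -s /var/tmp/bperf.sock), and the meaning of -i 32 is taken on faith from the traced arguments.

#!/usr/bin/env bash
# Condensed sketch of the traced setup sequence (assumptions noted inline).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

# Keep per-status-code NVMe error counts and retry failed I/O indefinitely,
# so every injected digest error is both recorded and survived.
"$rpc" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Start clean: no CRC32C error injection on the target side (assumption:
# rpc_cmd in the trace uses the target application's default RPC socket).
"$rpc" accel_error_inject_error -o crc32c -t disable

# Attach the controller with data digest enabled (--ddgst), so the host
# verifies a CRC32C over every data PDU received on this TCP qpair.
"$rpc" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Re-arm injection with the traced arguments (-t corrupt -i 32), then run
# the workload; each corruption surfaces below as a "Data digest error"
# plus a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion.
"$rpc" accel_error_inject_error -o crc32c -t corrupt -i 32
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests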
00:18:01.605 [2024-04-17 16:29:35.565112] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90
00:18:01.605 [2024-04-17 16:29:35.565418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:01.605 [2024-04-17 16:29:35.565458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:18:02.127 [... the same three-line pattern repeats roughly every 5 ms from 16:29:35.565 through 16:29:35.953: always tqpair=(0xd65c70), pdu=0x2000190fef90, qid:1 cid:15, a 32-block WRITE at a varying lba, each completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
00:18:02.127 [2024-04-17 16:29:35.958017] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data
digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.127 [2024-04-17 16:29:35.958323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.127 [2024-04-17 16:29:35.958360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:02.127 [2024-04-17 16:29:35.963114] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.127 [2024-04-17 16:29:35.963394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.127 [2024-04-17 16:29:35.963431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:02.127 [2024-04-17 16:29:35.968152] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.127 [2024-04-17 16:29:35.968436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.127 [2024-04-17 16:29:35.968474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:02.127 [2024-04-17 16:29:35.973370] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.127 [2024-04-17 16:29:35.973681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.127 [2024-04-17 16:29:35.973718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.127 [2024-04-17 16:29:35.978594] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.127 [2024-04-17 16:29:35.978918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.127 [2024-04-17 16:29:35.978957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:02.127 [2024-04-17 16:29:35.983697] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.127 [2024-04-17 16:29:35.984032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.127 [2024-04-17 16:29:35.984069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:02.127 [2024-04-17 16:29:35.988991] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.127 [2024-04-17 16:29:35.989298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.127 [2024-04-17 16:29:35.989334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:02.127 [2024-04-17 16:29:35.994138] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.127 [2024-04-17 16:29:35.994447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.127 [2024-04-17 16:29:35.994483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.127 [2024-04-17 16:29:35.999140] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.127 [2024-04-17 16:29:35.999479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.127 [2024-04-17 16:29:35.999515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:02.127 [2024-04-17 16:29:36.004161] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.127 [2024-04-17 16:29:36.004510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.127 [2024-04-17 16:29:36.004547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:02.127 [2024-04-17 16:29:36.009310] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.127 [2024-04-17 16:29:36.009655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.127 [2024-04-17 16:29:36.009692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:02.127 [2024-04-17 16:29:36.014644] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.127 [2024-04-17 16:29:36.014939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.127 [2024-04-17 16:29:36.014971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.127 [2024-04-17 16:29:36.019851] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.127 [2024-04-17 16:29:36.020150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.127 [2024-04-17 16:29:36.020187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:02.127 [2024-04-17 16:29:36.025217] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.127 [2024-04-17 16:29:36.025550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.127 [2024-04-17 16:29:36.025587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:18:02.127 [2024-04-17 16:29:36.030518] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.127 [2024-04-17 16:29:36.030826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.127 [2024-04-17 16:29:36.030863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:02.128 [2024-04-17 16:29:36.035732] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.128 [2024-04-17 16:29:36.036060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.128 [2024-04-17 16:29:36.036095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.128 [2024-04-17 16:29:36.040810] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.128 [2024-04-17 16:29:36.041108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.128 [2024-04-17 16:29:36.041144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:02.128 [2024-04-17 16:29:36.046003] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.128 [2024-04-17 16:29:36.046295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.128 [2024-04-17 16:29:36.046332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:02.128 [2024-04-17 16:29:36.051292] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.128 [2024-04-17 16:29:36.051616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.128 [2024-04-17 16:29:36.051652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:02.128 [2024-04-17 16:29:36.056484] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.128 [2024-04-17 16:29:36.056810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.128 [2024-04-17 16:29:36.056846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.128 [2024-04-17 16:29:36.061562] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.128 [2024-04-17 16:29:36.061871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.128 [2024-04-17 16:29:36.061907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:02.128 [2024-04-17 16:29:36.066833] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.128 [2024-04-17 16:29:36.067123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.128 [2024-04-17 16:29:36.067158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:02.128 [2024-04-17 16:29:36.071913] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.128 [2024-04-17 16:29:36.072232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.128 [2024-04-17 16:29:36.072264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:02.128 [2024-04-17 16:29:36.077120] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.128 [2024-04-17 16:29:36.077418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.128 [2024-04-17 16:29:36.077455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.128 [2024-04-17 16:29:36.082327] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.128 [2024-04-17 16:29:36.082611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.128 [2024-04-17 16:29:36.082655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:02.128 [2024-04-17 16:29:36.087423] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.128 [2024-04-17 16:29:36.087727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.128 [2024-04-17 16:29:36.087765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:02.128 [2024-04-17 16:29:36.092540] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.128 [2024-04-17 16:29:36.092849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.128 [2024-04-17 16:29:36.092886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:02.128 [2024-04-17 16:29:36.097683] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.128 [2024-04-17 16:29:36.097978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.128 [2024-04-17 16:29:36.098016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.128 [2024-04-17 16:29:36.102814] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.128 [2024-04-17 16:29:36.103107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.128 [2024-04-17 16:29:36.103142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:02.128 [2024-04-17 16:29:36.107878] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.128 [2024-04-17 16:29:36.108187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.128 [2024-04-17 16:29:36.108222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:02.128 [2024-04-17 16:29:36.112962] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.128 [2024-04-17 16:29:36.113252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.128 [2024-04-17 16:29:36.113288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:02.128 [2024-04-17 16:29:36.118066] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.128 [2024-04-17 16:29:36.118369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.128 [2024-04-17 16:29:36.118405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.128 [2024-04-17 16:29:36.123245] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.128 [2024-04-17 16:29:36.123532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.128 [2024-04-17 16:29:36.123569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:02.128 [2024-04-17 16:29:36.128355] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.128 [2024-04-17 16:29:36.128647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.128 [2024-04-17 16:29:36.128683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:02.128 [2024-04-17 16:29:36.133535] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.128 [2024-04-17 16:29:36.133857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.128 [2024-04-17 16:29:36.133892] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:02.128 [2024-04-17 16:29:36.138581] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.128 [2024-04-17 16:29:36.138874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.128 [2024-04-17 16:29:36.138910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.128 [2024-04-17 16:29:36.143693] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.128 [2024-04-17 16:29:36.144031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.128 [2024-04-17 16:29:36.144067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:02.128 [2024-04-17 16:29:36.148833] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.128 [2024-04-17 16:29:36.149116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.128 [2024-04-17 16:29:36.149151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:02.128 [2024-04-17 16:29:36.153924] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.128 [2024-04-17 16:29:36.154216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.128 [2024-04-17 16:29:36.154251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:02.128 [2024-04-17 16:29:36.158962] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.128 [2024-04-17 16:29:36.159246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.128 [2024-04-17 16:29:36.159281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.128 [2024-04-17 16:29:36.164012] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.128 [2024-04-17 16:29:36.164292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.128 [2024-04-17 16:29:36.164327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:02.128 [2024-04-17 16:29:36.169136] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.128 [2024-04-17 16:29:36.169424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.128 
[2024-04-17 16:29:36.169461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:02.401 [2024-04-17 16:29:36.174270] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.402 [2024-04-17 16:29:36.174561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.402 [2024-04-17 16:29:36.174597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:02.402 [2024-04-17 16:29:36.179337] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.402 [2024-04-17 16:29:36.179619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.402 [2024-04-17 16:29:36.179657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.402 [2024-04-17 16:29:36.184421] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.402 [2024-04-17 16:29:36.184715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.402 [2024-04-17 16:29:36.184752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:02.402 [2024-04-17 16:29:36.189485] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.402 [2024-04-17 16:29:36.189786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.402 [2024-04-17 16:29:36.189821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:02.402 [2024-04-17 16:29:36.194635] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.402 [2024-04-17 16:29:36.194930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.402 [2024-04-17 16:29:36.194968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:02.402 [2024-04-17 16:29:36.199624] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.402 [2024-04-17 16:29:36.199952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.402 [2024-04-17 16:29:36.199988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.402 [2024-04-17 16:29:36.204809] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.402 [2024-04-17 16:29:36.205094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:18:02.402 [2024-04-17 16:29:36.205132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:02.402 [2024-04-17 16:29:36.209885] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.402 [2024-04-17 16:29:36.210181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.402 [2024-04-17 16:29:36.210226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:02.402 [2024-04-17 16:29:36.215008] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.402 [2024-04-17 16:29:36.215292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.402 [2024-04-17 16:29:36.215329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:02.402 [2024-04-17 16:29:36.220083] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.402 [2024-04-17 16:29:36.220367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.402 [2024-04-17 16:29:36.220405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.402 [2024-04-17 16:29:36.225127] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.402 [2024-04-17 16:29:36.225410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.402 [2024-04-17 16:29:36.225448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:02.402 [2024-04-17 16:29:36.230229] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.402 [2024-04-17 16:29:36.230521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.402 [2024-04-17 16:29:36.230559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:02.402 [2024-04-17 16:29:36.235320] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.402 [2024-04-17 16:29:36.235611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.402 [2024-04-17 16:29:36.235648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:02.402 [2024-04-17 16:29:36.240381] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.402 [2024-04-17 16:29:36.240666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.402 [2024-04-17 16:29:36.240703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.402 [2024-04-17 16:29:36.245467] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.402 [2024-04-17 16:29:36.245762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.402 [2024-04-17 16:29:36.245811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:02.402 [2024-04-17 16:29:36.250558] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.402 [2024-04-17 16:29:36.250852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.402 [2024-04-17 16:29:36.250888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:02.402 [2024-04-17 16:29:36.255607] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.402 [2024-04-17 16:29:36.255902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.402 [2024-04-17 16:29:36.255938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:02.402 [2024-04-17 16:29:36.260679] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.402 [2024-04-17 16:29:36.260974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.402 [2024-04-17 16:29:36.261010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.402 [2024-04-17 16:29:36.265738] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.402 [2024-04-17 16:29:36.266042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.402 [2024-04-17 16:29:36.266085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:02.402 [2024-04-17 16:29:36.270830] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.402 [2024-04-17 16:29:36.271113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.403 [2024-04-17 16:29:36.271150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:02.403 [2024-04-17 16:29:36.275895] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.403 [2024-04-17 16:29:36.276190] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.403 [2024-04-17 16:29:36.276226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:02.403 [2024-04-17 16:29:36.280988] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.403 [2024-04-17 16:29:36.281281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.403 [2024-04-17 16:29:36.281316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.403 [2024-04-17 16:29:36.286037] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.403 [2024-04-17 16:29:36.286332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.403 [2024-04-17 16:29:36.286368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:02.403 [2024-04-17 16:29:36.291111] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.403 [2024-04-17 16:29:36.291392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.403 [2024-04-17 16:29:36.291429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:02.403 [2024-04-17 16:29:36.296197] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.403 [2024-04-17 16:29:36.296495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.403 [2024-04-17 16:29:36.296531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:02.403 [2024-04-17 16:29:36.301303] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.403 [2024-04-17 16:29:36.301583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.403 [2024-04-17 16:29:36.301619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.403 [2024-04-17 16:29:36.306384] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.403 [2024-04-17 16:29:36.306666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.403 [2024-04-17 16:29:36.306702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:02.403 [2024-04-17 16:29:36.311480] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.403 
[2024-04-17 16:29:36.311787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.403 [2024-04-17 16:29:36.311822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:02.403 [2024-04-17 16:29:36.316593] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.403 [2024-04-17 16:29:36.316936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.403 [2024-04-17 16:29:36.316971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:02.403 [2024-04-17 16:29:36.321796] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.403 [2024-04-17 16:29:36.322140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.403 [2024-04-17 16:29:36.322176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.403 [2024-04-17 16:29:36.327103] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.403 [2024-04-17 16:29:36.327388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.403 [2024-04-17 16:29:36.327424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:02.403 [2024-04-17 16:29:36.332234] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.403 [2024-04-17 16:29:36.332515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.403 [2024-04-17 16:29:36.332555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:02.403 [2024-04-17 16:29:36.337527] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.403 [2024-04-17 16:29:36.337876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.403 [2024-04-17 16:29:36.337911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:02.403 [2024-04-17 16:29:36.342684] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.403 [2024-04-17 16:29:36.342993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.403 [2024-04-17 16:29:36.343029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.403 [2024-04-17 16:29:36.347853] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) 
with pdu=0x2000190fef90 00:18:02.403 [2024-04-17 16:29:36.348143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.403 [2024-04-17 16:29:36.348180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:02.403 [2024-04-17 16:29:36.352950] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.403 [2024-04-17 16:29:36.353234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.403 [2024-04-17 16:29:36.353270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:02.403 [2024-04-17 16:29:36.358106] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.403 [2024-04-17 16:29:36.358390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.403 [2024-04-17 16:29:36.358427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:02.403 [2024-04-17 16:29:36.363399] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.403 [2024-04-17 16:29:36.363681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.403 [2024-04-17 16:29:36.363718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.403 [2024-04-17 16:29:36.368928] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.403 [2024-04-17 16:29:36.369211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.403 [2024-04-17 16:29:36.369247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:02.404 [2024-04-17 16:29:36.374231] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.404 [2024-04-17 16:29:36.374545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.404 [2024-04-17 16:29:36.374581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:02.404 [2024-04-17 16:29:36.379473] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.404 [2024-04-17 16:29:36.379754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.404 [2024-04-17 16:29:36.379802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:02.404 [2024-04-17 16:29:36.384598] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.404 [2024-04-17 16:29:36.384894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.404 [2024-04-17 16:29:36.384930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.404 [2024-04-17 16:29:36.389768] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.404 [2024-04-17 16:29:36.390061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.404 [2024-04-17 16:29:36.390100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:02.404 [2024-04-17 16:29:36.394843] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.404 [2024-04-17 16:29:36.395156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.404 [2024-04-17 16:29:36.395192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:02.404 [2024-04-17 16:29:36.400095] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.404 [2024-04-17 16:29:36.400377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.404 [2024-04-17 16:29:36.400414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:02.404 [2024-04-17 16:29:36.405147] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.404 [2024-04-17 16:29:36.405454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.404 [2024-04-17 16:29:36.405491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.404 [2024-04-17 16:29:36.410260] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.404 [2024-04-17 16:29:36.410552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.404 [2024-04-17 16:29:36.410589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:02.404 [2024-04-17 16:29:36.415392] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:02.404 [2024-04-17 16:29:36.415711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.404 [2024-04-17 16:29:36.415748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:02.404 [2024-04-17 16:29:36.420678] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90
00:18:02.404 [2024-04-17 16:29:36.421013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:02.404 [2024-04-17 16:29:36.421049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... elided: from 16:29:36.426045 to 16:29:37.164165 the same three-record pattern repeats roughly every 5 ms: a data_crc32_calc_done "Data digest error" on tqpair=(0xd65c70) with pdu=0x2000190fef90, the offending WRITE (sqid:1 cid:15 nsid:1 len:32, lba varies per attempt), and its completion with COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 p:0 m:0 dnr:0, sqhd cycling 0001/0021/0041/0061 ...]
lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.191 [2024-04-17 16:29:37.135525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.191 [2024-04-17 16:29:37.140106] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.191 [2024-04-17 16:29:37.140371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.191 [2024-04-17 16:29:37.140404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.191 [2024-04-17 16:29:37.145030] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.191 [2024-04-17 16:29:37.145294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.191 [2024-04-17 16:29:37.145325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.191 [2024-04-17 16:29:37.149976] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.191 [2024-04-17 16:29:37.150252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.191 [2024-04-17 16:29:37.150283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.191 [2024-04-17 16:29:37.154854] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.191 [2024-04-17 16:29:37.155110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.191 [2024-04-17 16:29:37.155134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.191 [2024-04-17 16:29:37.159522] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.191 [2024-04-17 16:29:37.159788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.191 [2024-04-17 16:29:37.159825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.191 [2024-04-17 16:29:37.164165] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.191 [2024-04-17 16:29:37.164419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.191 [2024-04-17 16:29:37.164451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.191 [2024-04-17 16:29:37.168787] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.191 [2024-04-17 16:29:37.169062] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.191 [2024-04-17 16:29:37.169094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.191 [2024-04-17 16:29:37.173434] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.191 [2024-04-17 16:29:37.173686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.191 [2024-04-17 16:29:37.173718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.191 [2024-04-17 16:29:37.178123] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.191 [2024-04-17 16:29:37.178377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.191 [2024-04-17 16:29:37.178407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.191 [2024-04-17 16:29:37.182713] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.191 [2024-04-17 16:29:37.182980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.192 [2024-04-17 16:29:37.183014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.192 [2024-04-17 16:29:37.187358] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.192 [2024-04-17 16:29:37.187610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.192 [2024-04-17 16:29:37.187642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.192 [2024-04-17 16:29:37.192011] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.192 [2024-04-17 16:29:37.192267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.192 [2024-04-17 16:29:37.192306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.192 [2024-04-17 16:29:37.196685] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.192 [2024-04-17 16:29:37.196949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.192 [2024-04-17 16:29:37.196984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.192 [2024-04-17 16:29:37.201309] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.192 [2024-04-17 16:29:37.201562] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.192 [2024-04-17 16:29:37.201595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.192 [2024-04-17 16:29:37.205982] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.192 [2024-04-17 16:29:37.206246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.192 [2024-04-17 16:29:37.206268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.192 [2024-04-17 16:29:37.210573] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.192 [2024-04-17 16:29:37.210838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.192 [2024-04-17 16:29:37.210875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.192 [2024-04-17 16:29:37.215169] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.192 [2024-04-17 16:29:37.215421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.192 [2024-04-17 16:29:37.215453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.192 [2024-04-17 16:29:37.219862] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.192 [2024-04-17 16:29:37.220115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.192 [2024-04-17 16:29:37.220138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.192 [2024-04-17 16:29:37.224489] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.192 [2024-04-17 16:29:37.224743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.192 [2024-04-17 16:29:37.224787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.192 [2024-04-17 16:29:37.229107] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.192 [2024-04-17 16:29:37.229359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.192 [2024-04-17 16:29:37.229391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.192 [2024-04-17 16:29:37.233754] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.192 
[2024-04-17 16:29:37.234020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.192 [2024-04-17 16:29:37.234049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.453 [2024-04-17 16:29:37.238429] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.453 [2024-04-17 16:29:37.238719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.453 [2024-04-17 16:29:37.238751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.453 [2024-04-17 16:29:37.243179] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.453 [2024-04-17 16:29:37.243430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.453 [2024-04-17 16:29:37.243464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.453 [2024-04-17 16:29:37.247869] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.454 [2024-04-17 16:29:37.248135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.454 [2024-04-17 16:29:37.248166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.454 [2024-04-17 16:29:37.252582] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.454 [2024-04-17 16:29:37.252852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.454 [2024-04-17 16:29:37.252884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.454 [2024-04-17 16:29:37.257790] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.454 [2024-04-17 16:29:37.258054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.454 [2024-04-17 16:29:37.258093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.454 [2024-04-17 16:29:37.262473] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.454 [2024-04-17 16:29:37.262725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.454 [2024-04-17 16:29:37.262755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.454 [2024-04-17 16:29:37.267153] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with 
pdu=0x2000190fef90 00:18:03.454 [2024-04-17 16:29:37.267406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.454 [2024-04-17 16:29:37.267439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.454 [2024-04-17 16:29:37.271729] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.454 [2024-04-17 16:29:37.271998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.454 [2024-04-17 16:29:37.272032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.454 [2024-04-17 16:29:37.276399] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.454 [2024-04-17 16:29:37.276650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.454 [2024-04-17 16:29:37.276701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.454 [2024-04-17 16:29:37.281146] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.454 [2024-04-17 16:29:37.281398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.454 [2024-04-17 16:29:37.281429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.454 [2024-04-17 16:29:37.285739] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.454 [2024-04-17 16:29:37.286008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.454 [2024-04-17 16:29:37.286046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.454 [2024-04-17 16:29:37.290370] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.454 [2024-04-17 16:29:37.290622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.454 [2024-04-17 16:29:37.290646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.454 [2024-04-17 16:29:37.295003] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.454 [2024-04-17 16:29:37.295257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.454 [2024-04-17 16:29:37.295292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.454 [2024-04-17 16:29:37.299589] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.454 [2024-04-17 16:29:37.299856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.454 [2024-04-17 16:29:37.299890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.454 [2024-04-17 16:29:37.304196] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.454 [2024-04-17 16:29:37.304458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.454 [2024-04-17 16:29:37.304488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.454 [2024-04-17 16:29:37.308904] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.454 [2024-04-17 16:29:37.309159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.454 [2024-04-17 16:29:37.309194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.454 [2024-04-17 16:29:37.313507] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.454 [2024-04-17 16:29:37.313761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.454 [2024-04-17 16:29:37.313804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.454 [2024-04-17 16:29:37.318165] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.454 [2024-04-17 16:29:37.318426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.454 [2024-04-17 16:29:37.318468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.454 [2024-04-17 16:29:37.322817] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.454 [2024-04-17 16:29:37.323082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.454 [2024-04-17 16:29:37.323112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.454 [2024-04-17 16:29:37.327437] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.454 [2024-04-17 16:29:37.327687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.454 [2024-04-17 16:29:37.327718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.454 [2024-04-17 16:29:37.332054] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.454 [2024-04-17 16:29:37.332306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.454 [2024-04-17 16:29:37.332346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.454 [2024-04-17 16:29:37.336750] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.454 [2024-04-17 16:29:37.337017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.454 [2024-04-17 16:29:37.337047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.454 [2024-04-17 16:29:37.341407] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.454 [2024-04-17 16:29:37.341659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.454 [2024-04-17 16:29:37.341690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.454 [2024-04-17 16:29:37.346141] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.454 [2024-04-17 16:29:37.346399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.454 [2024-04-17 16:29:37.346429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.454 [2024-04-17 16:29:37.350812] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.454 [2024-04-17 16:29:37.351070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.454 [2024-04-17 16:29:37.351099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.454 [2024-04-17 16:29:37.355435] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.454 [2024-04-17 16:29:37.355684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.454 [2024-04-17 16:29:37.355714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.454 [2024-04-17 16:29:37.360116] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.454 [2024-04-17 16:29:37.360367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.454 [2024-04-17 16:29:37.360397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:18:03.454 [2024-04-17 16:29:37.364747] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.454 [2024-04-17 16:29:37.365023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.454 [2024-04-17 16:29:37.365053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.454 [2024-04-17 16:29:37.369498] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.454 [2024-04-17 16:29:37.369777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.454 [2024-04-17 16:29:37.369818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.454 [2024-04-17 16:29:37.374242] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.454 [2024-04-17 16:29:37.374495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.455 [2024-04-17 16:29:37.374526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.455 [2024-04-17 16:29:37.378927] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.455 [2024-04-17 16:29:37.379179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.455 [2024-04-17 16:29:37.379214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.455 [2024-04-17 16:29:37.383578] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.455 [2024-04-17 16:29:37.383847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.455 [2024-04-17 16:29:37.383881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.455 [2024-04-17 16:29:37.388237] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.455 [2024-04-17 16:29:37.388492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.455 [2024-04-17 16:29:37.388530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.455 [2024-04-17 16:29:37.392865] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.455 [2024-04-17 16:29:37.393119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.455 [2024-04-17 16:29:37.393154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.455 [2024-04-17 16:29:37.397505] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.455 [2024-04-17 16:29:37.397757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.455 [2024-04-17 16:29:37.397803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.455 [2024-04-17 16:29:37.402167] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.455 [2024-04-17 16:29:37.402419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.455 [2024-04-17 16:29:37.402453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.455 [2024-04-17 16:29:37.406852] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.455 [2024-04-17 16:29:37.407114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.455 [2024-04-17 16:29:37.407150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.455 [2024-04-17 16:29:37.411458] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.455 [2024-04-17 16:29:37.411713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.455 [2024-04-17 16:29:37.411744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.455 [2024-04-17 16:29:37.416176] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.455 [2024-04-17 16:29:37.416431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.455 [2024-04-17 16:29:37.416462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.455 [2024-04-17 16:29:37.420849] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.455 [2024-04-17 16:29:37.421101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.455 [2024-04-17 16:29:37.421142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.455 [2024-04-17 16:29:37.425508] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.455 [2024-04-17 16:29:37.425759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.455 [2024-04-17 16:29:37.425803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.455 [2024-04-17 16:29:37.430109] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.455 [2024-04-17 16:29:37.430373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.455 [2024-04-17 16:29:37.430408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.455 [2024-04-17 16:29:37.434732] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.455 [2024-04-17 16:29:37.435001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.455 [2024-04-17 16:29:37.435034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.455 [2024-04-17 16:29:37.439357] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.455 [2024-04-17 16:29:37.439610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.455 [2024-04-17 16:29:37.439644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.455 [2024-04-17 16:29:37.443969] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.455 [2024-04-17 16:29:37.444225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.455 [2024-04-17 16:29:37.444257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.455 [2024-04-17 16:29:37.448674] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.455 [2024-04-17 16:29:37.448943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.455 [2024-04-17 16:29:37.448975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.455 [2024-04-17 16:29:37.453275] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.455 [2024-04-17 16:29:37.453526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.455 [2024-04-17 16:29:37.453556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.455 [2024-04-17 16:29:37.457969] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.455 [2024-04-17 16:29:37.458232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.455 [2024-04-17 16:29:37.458262] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.455 [2024-04-17 16:29:37.462601] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.455 [2024-04-17 16:29:37.462863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.455 [2024-04-17 16:29:37.462890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.455 [2024-04-17 16:29:37.467211] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.455 [2024-04-17 16:29:37.467474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.455 [2024-04-17 16:29:37.467504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.455 [2024-04-17 16:29:37.471826] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.455 [2024-04-17 16:29:37.472090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.455 [2024-04-17 16:29:37.472120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.455 [2024-04-17 16:29:37.476494] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.455 [2024-04-17 16:29:37.476747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.455 [2024-04-17 16:29:37.476786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.455 [2024-04-17 16:29:37.481063] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.455 [2024-04-17 16:29:37.481317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.455 [2024-04-17 16:29:37.481355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.455 [2024-04-17 16:29:37.485749] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.455 [2024-04-17 16:29:37.486015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.455 [2024-04-17 16:29:37.486045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.455 [2024-04-17 16:29:37.490421] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.455 [2024-04-17 16:29:37.490673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.455 
[2024-04-17 16:29:37.490703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.455 [2024-04-17 16:29:37.495033] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.455 [2024-04-17 16:29:37.495286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.455 [2024-04-17 16:29:37.495319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.714 [2024-04-17 16:29:37.499661] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.714 [2024-04-17 16:29:37.499935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.714 [2024-04-17 16:29:37.499966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.714 [2024-04-17 16:29:37.504370] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.714 [2024-04-17 16:29:37.504621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.714 [2024-04-17 16:29:37.504652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.714 [2024-04-17 16:29:37.509081] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.714 [2024-04-17 16:29:37.509342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.714 [2024-04-17 16:29:37.509382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.714 [2024-04-17 16:29:37.513804] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.714 [2024-04-17 16:29:37.514058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.715 [2024-04-17 16:29:37.514098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.715 [2024-04-17 16:29:37.518543] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.715 [2024-04-17 16:29:37.518807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.715 [2024-04-17 16:29:37.518836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.715 [2024-04-17 16:29:37.523250] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.715 [2024-04-17 16:29:37.523500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:18:03.715 [2024-04-17 16:29:37.523531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.715 [2024-04-17 16:29:37.527861] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.715 [2024-04-17 16:29:37.528111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.715 [2024-04-17 16:29:37.528140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.715 [2024-04-17 16:29:37.532537] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.715 [2024-04-17 16:29:37.532801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.715 [2024-04-17 16:29:37.532831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.715 [2024-04-17 16:29:37.537221] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.715 [2024-04-17 16:29:37.537472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.715 [2024-04-17 16:29:37.537502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.715 [2024-04-17 16:29:37.541832] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.715 [2024-04-17 16:29:37.542092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.715 [2024-04-17 16:29:37.542122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.715 [2024-04-17 16:29:37.546621] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.715 [2024-04-17 16:29:37.546916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.715 [2024-04-17 16:29:37.546946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.715 [2024-04-17 16:29:37.551377] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd65c70) with pdu=0x2000190fef90 00:18:03.715 [2024-04-17 16:29:37.551629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.715 [2024-04-17 16:29:37.551660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.715 00:18:03.715 Latency(us) 00:18:03.715 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:03.715 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:18:03.715 nvme0n1 : 2.00 6109.13 763.64 0.00 0.00 2613.38 
1794.79 11319.85 00:18:03.715 =================================================================================================================== 00:18:03.715 Total : 6109.13 763.64 0.00 0.00 2613.38 1794.79 11319.85 00:18:03.715 0 00:18:03.715 16:29:37 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:03.715 16:29:37 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:03.715 16:29:37 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:03.715 16:29:37 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:03.715 | .driver_specific 00:18:03.715 | .nvme_error 00:18:03.715 | .status_code 00:18:03.715 | .command_transient_transport_error' 00:18:03.974 16:29:37 -- host/digest.sh@71 -- # (( 394 > 0 )) 00:18:03.974 16:29:37 -- host/digest.sh@73 -- # killprocess 86089 00:18:03.974 16:29:37 -- common/autotest_common.sh@936 -- # '[' -z 86089 ']' 00:18:03.974 16:29:37 -- common/autotest_common.sh@940 -- # kill -0 86089 00:18:03.974 16:29:37 -- common/autotest_common.sh@941 -- # uname 00:18:03.974 16:29:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:03.974 16:29:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86089 00:18:03.974 16:29:37 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:03.974 16:29:37 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:03.974 16:29:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86089' 00:18:03.974 killing process with pid 86089 00:18:03.974 Received shutdown signal, test time was about 2.000000 seconds 00:18:03.974 00:18:03.974 Latency(us) 00:18:03.974 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:03.974 =================================================================================================================== 00:18:03.974 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:03.974 16:29:37 -- common/autotest_common.sh@955 -- # kill 86089 00:18:03.974 16:29:37 -- common/autotest_common.sh@960 -- # wait 86089 00:18:04.232 16:29:38 -- host/digest.sh@116 -- # killprocess 85768 00:18:04.232 16:29:38 -- common/autotest_common.sh@936 -- # '[' -z 85768 ']' 00:18:04.232 16:29:38 -- common/autotest_common.sh@940 -- # kill -0 85768 00:18:04.232 16:29:38 -- common/autotest_common.sh@941 -- # uname 00:18:04.232 16:29:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:04.232 16:29:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85768 00:18:04.232 16:29:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:04.232 16:29:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:04.232 16:29:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85768' 00:18:04.232 killing process with pid 85768 00:18:04.232 16:29:38 -- common/autotest_common.sh@955 -- # kill 85768 00:18:04.232 16:29:38 -- common/autotest_common.sh@960 -- # wait 85768 00:18:04.491 00:18:04.491 real 0m19.082s 00:18:04.491 user 0m36.925s 00:18:04.491 sys 0m4.576s 00:18:04.491 16:29:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:04.491 16:29:38 -- common/autotest_common.sh@10 -- # set +x 00:18:04.491 ************************************ 00:18:04.491 END TEST nvmf_digest_error 00:18:04.491 ************************************ 00:18:04.491 16:29:38 -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:18:04.491 16:29:38 -- host/digest.sh@150 -- # nvmftestfini 00:18:04.491 16:29:38 -- 
nvmf/common.sh@477 -- # nvmfcleanup 00:18:04.491 16:29:38 -- nvmf/common.sh@117 -- # sync 00:18:04.750 16:29:38 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:04.750 16:29:38 -- nvmf/common.sh@120 -- # set +e 00:18:04.750 16:29:38 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:04.750 16:29:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:04.750 rmmod nvme_tcp 00:18:04.750 rmmod nvme_fabrics 00:18:04.750 rmmod nvme_keyring 00:18:04.750 16:29:38 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:04.750 16:29:38 -- nvmf/common.sh@124 -- # set -e 00:18:04.750 16:29:38 -- nvmf/common.sh@125 -- # return 0 00:18:04.750 16:29:38 -- nvmf/common.sh@478 -- # '[' -n 85768 ']' 00:18:04.750 16:29:38 -- nvmf/common.sh@479 -- # killprocess 85768 00:18:04.750 16:29:38 -- common/autotest_common.sh@936 -- # '[' -z 85768 ']' 00:18:04.750 16:29:38 -- common/autotest_common.sh@940 -- # kill -0 85768 00:18:04.750 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (85768) - No such process 00:18:04.750 Process with pid 85768 is not found 00:18:04.750 16:29:38 -- common/autotest_common.sh@963 -- # echo 'Process with pid 85768 is not found' 00:18:04.750 16:29:38 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:04.750 16:29:38 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:04.750 16:29:38 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:04.750 16:29:38 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:04.750 16:29:38 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:04.750 16:29:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:04.750 16:29:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:04.750 16:29:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:04.750 16:29:38 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:04.750 00:18:04.750 real 0m39.161s 00:18:04.750 user 1m14.037s 00:18:04.750 sys 0m9.640s 00:18:04.750 ************************************ 00:18:04.750 END TEST nvmf_digest 00:18:04.750 ************************************ 00:18:04.750 16:29:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:04.750 16:29:38 -- common/autotest_common.sh@10 -- # set +x 00:18:04.750 16:29:38 -- nvmf/nvmf.sh@108 -- # [[ 1 -eq 1 ]] 00:18:04.750 16:29:38 -- nvmf/nvmf.sh@108 -- # [[ tcp == \t\c\p ]] 00:18:04.750 16:29:38 -- nvmf/nvmf.sh@110 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:18:04.750 16:29:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:04.750 16:29:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:04.750 16:29:38 -- common/autotest_common.sh@10 -- # set +x 00:18:04.750 ************************************ 00:18:04.750 START TEST nvmf_mdns_discovery 00:18:04.750 ************************************ 00:18:04.750 16:29:38 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:18:05.009 * Looking for test storage... 
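The pass/fail decision for the digest-error run above reduces to the single counter queried in its trace: host/digest.sh calls bdev_get_iostat over the bperf RPC socket and asserts that the NVMe transient-transport-error counter is positive (394 in this run, incremented once per injected CRC failure). Below is a minimal standalone sketch of that check, assuming, as in this run, a bdevperf instance listening on /var/tmp/bperf.sock that exposes a bdev named nvme0n1; the paths and names mirror this log rather than a fixed interface.

    #!/usr/bin/env bash
    # Sketch of the transient-error assertion traced in host/digest.sh above.
    # Assumed: SPDK checkout at this path and a running bdevperf RPC socket.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock

    # bdev_get_iostat reports per-bdev NVMe error counters in its JSON output;
    # the filter below is the one the test itself runs.
    errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0]
      | .driver_specific
      | .nvme_error
      | .status_code
      | .command_transient_transport_error')

    # Every injected data digest error should surface as a COMMAND TRANSIENT
    # TRANSPORT ERROR completion, so the count must be greater than zero.
    (( errcount > 0 )) || { echo 'no transient transport errors counted' >&2; exit 1; }
    echo "transient transport errors: $errcount"
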
00:18:05.009 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:05.009 16:29:38 -- host/mdns_discovery.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:05.009 16:29:38 -- nvmf/common.sh@7 -- # uname -s 00:18:05.009 16:29:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:05.009 16:29:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:05.009 16:29:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:05.009 16:29:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:05.009 16:29:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:05.009 16:29:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:05.009 16:29:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:05.009 16:29:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:05.009 16:29:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:05.009 16:29:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:05.009 16:29:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d 00:18:05.009 16:29:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=35bbb10f-fc38-42ac-b909-033700c5e05d 00:18:05.009 16:29:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:05.009 16:29:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:05.009 16:29:38 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:05.009 16:29:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:05.009 16:29:38 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:05.009 16:29:38 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:05.009 16:29:38 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:05.009 16:29:38 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:05.009 16:29:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.009 16:29:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.009 16:29:38 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.009 16:29:38 -- paths/export.sh@5 -- # export PATH 00:18:05.009 16:29:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.009 16:29:38 -- nvmf/common.sh@47 -- # : 0 00:18:05.009 16:29:38 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:05.009 16:29:38 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:05.009 16:29:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:05.009 16:29:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:05.009 16:29:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:05.009 16:29:38 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:05.009 16:29:38 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:05.009 16:29:38 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:05.009 16:29:38 -- host/mdns_discovery.sh@12 -- # DISCOVERY_FILTER=address 00:18:05.009 16:29:38 -- host/mdns_discovery.sh@13 -- # DISCOVERY_PORT=8009 00:18:05.009 16:29:38 -- host/mdns_discovery.sh@14 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:18:05.009 16:29:38 -- host/mdns_discovery.sh@17 -- # NQN=nqn.2016-06.io.spdk:cnode 00:18:05.009 16:29:38 -- host/mdns_discovery.sh@18 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:18:05.009 16:29:38 -- host/mdns_discovery.sh@20 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:18:05.009 16:29:38 -- host/mdns_discovery.sh@21 -- # HOST_SOCK=/tmp/host.sock 00:18:05.009 16:29:38 -- host/mdns_discovery.sh@23 -- # nvmftestinit 00:18:05.009 16:29:38 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:05.009 16:29:38 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:05.009 16:29:38 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:05.009 16:29:38 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:05.009 16:29:38 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:05.009 16:29:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:05.009 16:29:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:05.009 16:29:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:05.009 16:29:38 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:18:05.009 16:29:38 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:18:05.009 16:29:38 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:18:05.009 16:29:38 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:18:05.009 16:29:38 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:18:05.009 16:29:38 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:18:05.009 16:29:38 -- nvmf/common.sh@141 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:18:05.009 16:29:38 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:05.009 16:29:38 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:05.009 16:29:38 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:05.009 16:29:38 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:05.009 16:29:38 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:05.009 16:29:38 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:05.009 16:29:38 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:05.009 16:29:38 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:05.009 16:29:38 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:05.009 16:29:38 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:05.009 16:29:38 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:05.009 16:29:38 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:05.009 16:29:38 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:05.009 Cannot find device "nvmf_tgt_br" 00:18:05.009 16:29:38 -- nvmf/common.sh@155 -- # true 00:18:05.009 16:29:38 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:05.009 Cannot find device "nvmf_tgt_br2" 00:18:05.009 16:29:38 -- nvmf/common.sh@156 -- # true 00:18:05.009 16:29:38 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:05.009 16:29:38 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:05.009 Cannot find device "nvmf_tgt_br" 00:18:05.009 16:29:38 -- nvmf/common.sh@158 -- # true 00:18:05.009 16:29:38 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:05.009 Cannot find device "nvmf_tgt_br2" 00:18:05.009 16:29:38 -- nvmf/common.sh@159 -- # true 00:18:05.009 16:29:38 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:05.009 16:29:38 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:05.009 16:29:39 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:05.009 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:05.009 16:29:39 -- nvmf/common.sh@162 -- # true 00:18:05.009 16:29:39 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:05.009 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:05.009 16:29:39 -- nvmf/common.sh@163 -- # true 00:18:05.009 16:29:39 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:05.009 16:29:39 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:05.009 16:29:39 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:05.009 16:29:39 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:05.009 16:29:39 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:05.268 16:29:39 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:05.268 16:29:39 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:05.268 16:29:39 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:05.268 16:29:39 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:05.268 16:29:39 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:05.268 16:29:39 -- nvmf/common.sh@184 -- # ip 
link set nvmf_init_br up 00:18:05.268 16:29:39 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:05.268 16:29:39 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:05.268 16:29:39 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:05.268 16:29:39 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:05.268 16:29:39 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:05.268 16:29:39 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:05.268 16:29:39 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:05.268 16:29:39 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:05.268 16:29:39 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:05.268 16:29:39 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:05.268 16:29:39 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:05.268 16:29:39 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:05.268 16:29:39 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:05.268 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:05.268 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:18:05.268 00:18:05.268 --- 10.0.0.2 ping statistics --- 00:18:05.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.268 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:18:05.268 16:29:39 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:05.268 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:05.268 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:18:05.268 00:18:05.268 --- 10.0.0.3 ping statistics --- 00:18:05.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.268 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:18:05.268 16:29:39 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:05.268 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:05.268 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:18:05.268 00:18:05.268 --- 10.0.0.1 ping statistics --- 00:18:05.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.268 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:18:05.268 16:29:39 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:05.268 16:29:39 -- nvmf/common.sh@422 -- # return 0 00:18:05.268 16:29:39 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:05.268 16:29:39 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:05.268 16:29:39 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:05.268 16:29:39 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:05.268 16:29:39 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:05.268 16:29:39 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:05.268 16:29:39 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:05.268 16:29:39 -- host/mdns_discovery.sh@28 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:05.268 16:29:39 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:05.268 16:29:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:05.268 16:29:39 -- common/autotest_common.sh@10 -- # set +x 00:18:05.268 16:29:39 -- nvmf/common.sh@470 -- # nvmfpid=86391 00:18:05.268 16:29:39 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:05.268 16:29:39 -- nvmf/common.sh@471 -- # waitforlisten 86391 00:18:05.268 16:29:39 -- common/autotest_common.sh@817 -- # '[' -z 86391 ']' 00:18:05.268 16:29:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.268 16:29:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:05.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:05.269 16:29:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:05.269 16:29:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:05.269 16:29:39 -- common/autotest_common.sh@10 -- # set +x 00:18:05.269 [2024-04-17 16:29:39.304420] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:18:05.269 [2024-04-17 16:29:39.304512] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:05.527 [2024-04-17 16:29:39.442286] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.527 [2024-04-17 16:29:39.570714] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:05.527 [2024-04-17 16:29:39.570801] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:05.527 [2024-04-17 16:29:39.570818] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:05.527 [2024-04-17 16:29:39.570829] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:05.527 [2024-04-17 16:29:39.570838] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
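The three pings above close out nvmf_veth_init: one network namespace, three veth pairs, one bridge, with 10.0.0.1 on the initiator side and 10.0.0.2/10.0.0.3 inside the target namespace. Condensed into stand-alone commands (interface and namespace names exactly as traced), the topology is:

    # Condensed sketch of nvmf_veth_init as traced above.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3   # sanity checks, as in the trace

The two iptables rules are load-bearing: without the INPUT accept on the NVMe/TCP port and bridge-internal forwarding, the discovery and data traffic exercised later would be silently dropped.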
00:18:05.527 [2024-04-17 16:29:39.570869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:06.465 16:29:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:06.465 16:29:40 -- common/autotest_common.sh@850 -- # return 0 00:18:06.465 16:29:40 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:06.465 16:29:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:06.465 16:29:40 -- common/autotest_common.sh@10 -- # set +x 00:18:06.465 16:29:40 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:06.465 16:29:40 -- host/mdns_discovery.sh@30 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:18:06.465 16:29:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:06.465 16:29:40 -- common/autotest_common.sh@10 -- # set +x 00:18:06.465 16:29:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:06.465 16:29:40 -- host/mdns_discovery.sh@31 -- # rpc_cmd framework_start_init 00:18:06.465 16:29:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:06.465 16:29:40 -- common/autotest_common.sh@10 -- # set +x 00:18:06.465 16:29:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:06.465 16:29:40 -- host/mdns_discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:06.465 16:29:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:06.465 16:29:40 -- common/autotest_common.sh@10 -- # set +x 00:18:06.465 [2024-04-17 16:29:40.470691] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:06.465 16:29:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:06.465 16:29:40 -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:18:06.465 16:29:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:06.465 16:29:40 -- common/autotest_common.sh@10 -- # set +x 00:18:06.465 [2024-04-17 16:29:40.478832] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:18:06.465 16:29:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:06.465 16:29:40 -- host/mdns_discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:18:06.465 16:29:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:06.465 16:29:40 -- common/autotest_common.sh@10 -- # set +x 00:18:06.465 null0 00:18:06.465 16:29:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:06.465 16:29:40 -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:18:06.465 16:29:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:06.465 16:29:40 -- common/autotest_common.sh@10 -- # set +x 00:18:06.465 null1 00:18:06.465 16:29:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:06.465 16:29:40 -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null2 1000 512 00:18:06.465 16:29:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:06.465 16:29:40 -- common/autotest_common.sh@10 -- # set +x 00:18:06.465 null2 00:18:06.465 16:29:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:06.465 16:29:40 -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null3 1000 512 00:18:06.465 16:29:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:06.465 16:29:40 -- common/autotest_common.sh@10 -- # set +x 00:18:06.724 null3 00:18:06.724 16:29:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:06.724 16:29:40 -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_wait_for_examine 
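Everything the target-side rpc_cmd calls above did can be replayed against a bare nvmf_tgt started with --wait-for-rpc; a sketch of the same sequence (the rpc.py path is an assumption about the checkout layout, the flags are verbatim from the trace):

    # Sketch of the target bootstrap traced above, as direct rpc.py calls.
    rpc=scripts/rpc.py                                # assumed repo-relative path
    $rpc nvmf_set_config --discovery-filter=address   # filter referrals by address
    $rpc framework_start_init                         # finish --wait-for-rpc startup
    $rpc nvmf_create_transport -t tcp -o -u 8192      # options exactly as traced
    $rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
         -t tcp -a 10.0.0.2 -s 8009                   # mDNS-advertised discovery port
    for n in null0 null1 null2 null3; do
      $rpc bdev_null_create "$n" 1000 512             # 1000 MB, 512 B block size
    done
    $rpc bdev_wait_for_examine

Ordering matters here: because the target was launched with --wait-for-rpc, nvmf_set_config must land before framework_start_init, and the transport must exist before any listener can be added.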
00:18:06.724 16:29:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:06.724 16:29:40 -- common/autotest_common.sh@10 -- # set +x 00:18:06.724 16:29:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:06.724 16:29:40 -- host/mdns_discovery.sh@47 -- # hostpid=86442 00:18:06.724 16:29:40 -- host/mdns_discovery.sh@48 -- # waitforlisten 86442 /tmp/host.sock 00:18:06.724 16:29:40 -- common/autotest_common.sh@817 -- # '[' -z 86442 ']' 00:18:06.724 16:29:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:18:06.724 16:29:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:06.724 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:18:06.724 16:29:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:18:06.724 16:29:40 -- host/mdns_discovery.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:18:06.724 16:29:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:06.724 16:29:40 -- common/autotest_common.sh@10 -- # set +x 00:18:06.724 [2024-04-17 16:29:40.582866] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:18:06.724 [2024-04-17 16:29:40.582981] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86442 ] 00:18:06.724 [2024-04-17 16:29:40.722682] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.982 [2024-04-17 16:29:40.850066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:07.548 16:29:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:07.548 16:29:41 -- common/autotest_common.sh@850 -- # return 0 00:18:07.548 16:29:41 -- host/mdns_discovery.sh@50 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:18:07.548 16:29:41 -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahi_clientpid;kill $avahipid;' EXIT 00:18:07.548 16:29:41 -- host/mdns_discovery.sh@55 -- # avahi-daemon --kill 00:18:07.806 16:29:41 -- host/mdns_discovery.sh@57 -- # avahipid=86471 00:18:07.806 16:29:41 -- host/mdns_discovery.sh@58 -- # sleep 1 00:18:07.806 16:29:41 -- host/mdns_discovery.sh@56 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:18:07.806 16:29:41 -- host/mdns_discovery.sh@56 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:18:07.806 Process 1005 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:18:07.806 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:18:07.806 Successfully dropped root privileges. 00:18:07.806 avahi-daemon 0.8 starting up. 00:18:07.806 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:18:07.806 Successfully called chroot(). 00:18:07.806 Successfully dropped remaining capabilities. 00:18:07.806 No service file found in /etc/avahi/services. 00:18:07.806 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:18:07.806 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:18:07.806 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:18:07.806 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:18:07.806 Network interface enumeration completed. 
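The avahi-daemon starting up here was handed its configuration inline: the test echoes a [server] section restricting it to the two target-side interfaces and IPv4 only, and passes it as a process-substituted file, which is why the trace shows -f /dev/fd/63. A sketch, assuming the pid is captured with $! (the trace only shows the resulting avahipid=86471):

    # Sketch: run avahi-daemon in the target namespace with an inline config.
    # /dev/fd/63 in the trace is bash's process-substitution descriptor.
    ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f <(echo -e \
      '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no') &
    avahipid=$!   # assumption: backgrounded and captured this way
    sleep 1       # let the daemon join the mDNS multicast groups, as the test does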
00:18:07.806 Registering new address record for fe80::b861:3dff:fef2:9f8a on nvmf_tgt_if2.*. 00:18:07.806 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:18:07.806 Registering new address record for fe80::98ea:d9ff:fed7:19e5 on nvmf_tgt_if.*. 00:18:07.806 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:18:08.740 Server startup complete. Host name is fedora38-cloud-1705279005-2131.local. Local service cookie is 605085733. 00:18:08.740 16:29:42 -- host/mdns_discovery.sh@60 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:18:08.740 16:29:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:08.740 16:29:42 -- common/autotest_common.sh@10 -- # set +x 00:18:08.740 16:29:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:08.740 16:29:42 -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:18:08.740 16:29:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:08.740 16:29:42 -- common/autotest_common.sh@10 -- # set +x 00:18:08.741 16:29:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:08.741 16:29:42 -- host/mdns_discovery.sh@85 -- # notify_id=0 00:18:08.741 16:29:42 -- host/mdns_discovery.sh@91 -- # get_subsystem_names 00:18:08.741 16:29:42 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:08.741 16:29:42 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:18:08.741 16:29:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:08.741 16:29:42 -- host/mdns_discovery.sh@68 -- # sort 00:18:08.741 16:29:42 -- common/autotest_common.sh@10 -- # set +x 00:18:08.741 16:29:42 -- host/mdns_discovery.sh@68 -- # xargs 00:18:08.741 16:29:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:08.741 16:29:42 -- host/mdns_discovery.sh@91 -- # [[ '' == '' ]] 00:18:08.741 16:29:42 -- host/mdns_discovery.sh@92 -- # get_bdev_list 00:18:08.741 16:29:42 -- host/mdns_discovery.sh@64 -- # sort 00:18:08.741 16:29:42 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:18:08.741 16:29:42 -- host/mdns_discovery.sh@64 -- # xargs 00:18:08.741 16:29:42 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:08.741 16:29:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:08.741 16:29:42 -- common/autotest_common.sh@10 -- # set +x 00:18:08.741 16:29:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:08.999 16:29:42 -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:18:08.999 16:29:42 -- host/mdns_discovery.sh@94 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:18:08.999 16:29:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:08.999 16:29:42 -- common/autotest_common.sh@10 -- # set +x 00:18:08.999 16:29:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:08.999 16:29:42 -- host/mdns_discovery.sh@95 -- # get_subsystem_names 00:18:08.999 16:29:42 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:08.999 16:29:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:08.999 16:29:42 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:18:08.999 16:29:42 -- common/autotest_common.sh@10 -- # set +x 00:18:08.999 16:29:42 -- host/mdns_discovery.sh@68 -- # sort 00:18:08.999 16:29:42 -- host/mdns_discovery.sh@68 -- # xargs 00:18:08.999 16:29:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:08.999 16:29:42 -- host/mdns_discovery.sh@95 -- # [[ '' == '' ]] 00:18:08.999 
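The empty [[ '' == '' ]] assertions just traced come from small jq-based helpers that flatten RPC output into space-separated strings for easy comparison; sketched out (the rpc_host wrapper is hypothetical, the pipelines are verbatim from the trace):

    # Sketch of the verification helpers behind the [[ '' == '' ]] checks.
    rpc_host() { scripts/rpc.py -s /tmp/host.sock "$@"; }   # hypothetical wrapper

    get_subsystem_names() {
      rpc_host bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {
      rpc_host bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    # Before mDNS discovery attaches anything, both helpers print the empty
    # string, which is exactly what the assertions above expect.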
16:29:42 -- host/mdns_discovery.sh@96 -- # get_bdev_list 00:18:08.999 16:29:42 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:08.999 16:29:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:08.999 16:29:42 -- host/mdns_discovery.sh@64 -- # sort 00:18:08.999 16:29:42 -- common/autotest_common.sh@10 -- # set +x 00:18:08.999 16:29:42 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:18:08.999 16:29:42 -- host/mdns_discovery.sh@64 -- # xargs 00:18:08.999 16:29:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:08.999 16:29:42 -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:18:08.999 16:29:42 -- host/mdns_discovery.sh@98 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:18:08.999 16:29:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:08.999 16:29:42 -- common/autotest_common.sh@10 -- # set +x 00:18:08.999 16:29:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:08.999 16:29:42 -- host/mdns_discovery.sh@99 -- # get_subsystem_names 00:18:08.999 16:29:42 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:08.999 16:29:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:08.999 16:29:42 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:18:08.999 16:29:42 -- host/mdns_discovery.sh@68 -- # sort 00:18:08.999 16:29:42 -- common/autotest_common.sh@10 -- # set +x 00:18:08.999 16:29:42 -- host/mdns_discovery.sh@68 -- # xargs 00:18:08.999 16:29:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:08.999 [2024-04-17 16:29:43.009509] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:18:08.999 16:29:43 -- host/mdns_discovery.sh@99 -- # [[ '' == '' ]] 00:18:08.999 16:29:43 -- host/mdns_discovery.sh@100 -- # get_bdev_list 00:18:08.999 16:29:43 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:08.999 16:29:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:08.999 16:29:43 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:18:08.999 16:29:43 -- common/autotest_common.sh@10 -- # set +x 00:18:08.999 16:29:43 -- host/mdns_discovery.sh@64 -- # xargs 00:18:08.999 16:29:43 -- host/mdns_discovery.sh@64 -- # sort 00:18:08.999 16:29:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.257 16:29:43 -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:18:09.257 16:29:43 -- host/mdns_discovery.sh@104 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:09.257 16:29:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.257 16:29:43 -- common/autotest_common.sh@10 -- # set +x 00:18:09.257 [2024-04-17 16:29:43.071531] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:09.257 16:29:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.257 16:29:43 -- host/mdns_discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:18:09.257 16:29:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.257 16:29:43 -- common/autotest_common.sh@10 -- # set +x 00:18:09.257 16:29:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.257 16:29:43 -- host/mdns_discovery.sh@111 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:18:09.257 16:29:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.257 16:29:43 -- common/autotest_common.sh@10 -- # set +x 00:18:09.257 16:29:43 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.257 16:29:43 -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:18:09.257 16:29:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.257 16:29:43 -- common/autotest_common.sh@10 -- # set +x 00:18:09.257 16:29:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.257 16:29:43 -- host/mdns_discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:18:09.257 16:29:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.257 16:29:43 -- common/autotest_common.sh@10 -- # set +x 00:18:09.257 16:29:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.257 16:29:43 -- host/mdns_discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:18:09.257 16:29:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.257 16:29:43 -- common/autotest_common.sh@10 -- # set +x 00:18:09.257 [2024-04-17 16:29:43.115512] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:18:09.257 16:29:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.257 16:29:43 -- host/mdns_discovery.sh@120 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:18:09.257 16:29:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.257 16:29:43 -- common/autotest_common.sh@10 -- # set +x 00:18:09.257 [2024-04-17 16:29:43.123439] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:09.257 16:29:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.257 16:29:43 -- host/mdns_discovery.sh@124 -- # avahi_clientpid=86522 00:18:09.257 16:29:43 -- host/mdns_discovery.sh@125 -- # sleep 5 00:18:09.257 16:29:43 -- host/mdns_discovery.sh@123 -- # ip netns exec nvmf_tgt_ns_spdk /usr/bin/avahi-publish --domain=local --service CDC _nvme-disc._tcp 8009 NQN=nqn.2014-08.org.nvmexpress.discovery p=tcp 00:18:10.190 [2024-04-17 16:29:43.909513] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:18:10.190 Established under name 'CDC' 00:18:10.448 [2024-04-17 16:29:44.309560] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:18:10.448 [2024-04-17 16:29:44.309613] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.3) 00:18:10.448 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:18:10.448 cookie is 0 00:18:10.448 is_local: 1 00:18:10.448 our_own: 0 00:18:10.448 wide_area: 0 00:18:10.448 multicast: 1 00:18:10.448 cached: 1 00:18:10.448 [2024-04-17 16:29:44.409534] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:18:10.448 [2024-04-17 16:29:44.409584] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.2) 00:18:10.448 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:18:10.448 cookie is 0 00:18:10.448 is_local: 1 00:18:10.448 our_own: 0 00:18:10.448 wide_area: 0 00:18:10.448 multicast: 1 00:18:10.448 cached: 1 00:18:11.384 [2024-04-17 16:29:45.315089] bdev_nvme.c:6898:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:18:11.384 [2024-04-17 16:29:45.315124] bdev_nvme.c:6978:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery 
ctrlr connected 00:18:11.384 [2024-04-17 16:29:45.315157] bdev_nvme.c:6861:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:11.384 [2024-04-17 16:29:45.401266] bdev_nvme.c:6827:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:18:11.384 [2024-04-17 16:29:45.414865] bdev_nvme.c:6898:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:11.384 [2024-04-17 16:29:45.414895] bdev_nvme.c:6978:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:11.384 [2024-04-17 16:29:45.414913] bdev_nvme.c:6861:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:11.641 [2024-04-17 16:29:45.461266] bdev_nvme.c:6717:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:18:11.641 [2024-04-17 16:29:45.461311] bdev_nvme.c:6676:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:18:11.641 [2024-04-17 16:29:45.503593] bdev_nvme.c:6827:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:18:11.642 [2024-04-17 16:29:45.564990] bdev_nvme.c:6717:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:18:11.642 [2024-04-17 16:29:45.565036] bdev_nvme.c:6676:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:14.172 16:29:48 -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:18:14.172 16:29:48 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:18:14.172 16:29:48 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:18:14.172 16:29:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:14.172 16:29:48 -- common/autotest_common.sh@10 -- # set +x 00:18:14.172 16:29:48 -- host/mdns_discovery.sh@80 -- # xargs 00:18:14.172 16:29:48 -- host/mdns_discovery.sh@80 -- # sort 00:18:14.172 16:29:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:14.172 16:29:48 -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:18:14.172 16:29:48 -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:18:14.172 16:29:48 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:18:14.172 16:29:48 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:14.172 16:29:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:14.172 16:29:48 -- host/mdns_discovery.sh@76 -- # sort 00:18:14.172 16:29:48 -- common/autotest_common.sh@10 -- # set +x 00:18:14.172 16:29:48 -- host/mdns_discovery.sh@76 -- # xargs 00:18:14.172 16:29:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:14.430 16:29:48 -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:18:14.430 16:29:48 -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:18:14.430 16:29:48 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:14.430 16:29:48 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:18:14.430 16:29:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:14.430 16:29:48 -- common/autotest_common.sh@10 -- # set +x 00:18:14.430 16:29:48 -- host/mdns_discovery.sh@68 -- # sort 00:18:14.430 16:29:48 -- host/mdns_discovery.sh@68 -- # 
xargs 00:18:14.430 16:29:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:14.430 16:29:48 -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:18:14.430 16:29:48 -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:18:14.430 16:29:48 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:14.430 16:29:48 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:18:14.430 16:29:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:14.430 16:29:48 -- common/autotest_common.sh@10 -- # set +x 00:18:14.430 16:29:48 -- host/mdns_discovery.sh@64 -- # sort 00:18:14.430 16:29:48 -- host/mdns_discovery.sh@64 -- # xargs 00:18:14.430 16:29:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:14.430 16:29:48 -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:18:14.430 16:29:48 -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:18:14.430 16:29:48 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:14.430 16:29:48 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:18:14.430 16:29:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:14.430 16:29:48 -- host/mdns_discovery.sh@72 -- # sort -n 00:18:14.430 16:29:48 -- common/autotest_common.sh@10 -- # set +x 00:18:14.430 16:29:48 -- host/mdns_discovery.sh@72 -- # xargs 00:18:14.431 16:29:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:14.431 16:29:48 -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:18:14.431 16:29:48 -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:18:14.431 16:29:48 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:14.431 16:29:48 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:18:14.431 16:29:48 -- host/mdns_discovery.sh@72 -- # sort -n 00:18:14.431 16:29:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:14.431 16:29:48 -- common/autotest_common.sh@10 -- # set +x 00:18:14.431 16:29:48 -- host/mdns_discovery.sh@72 -- # xargs 00:18:14.431 16:29:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:14.431 16:29:48 -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:18:14.431 16:29:48 -- host/mdns_discovery.sh@133 -- # get_notification_count 00:18:14.431 16:29:48 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:18:14.431 16:29:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:14.431 16:29:48 -- common/autotest_common.sh@10 -- # set +x 00:18:14.431 16:29:48 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:18:14.431 16:29:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:14.689 16:29:48 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:18:14.689 16:29:48 -- host/mdns_discovery.sh@88 -- # notify_id=2 00:18:14.689 16:29:48 -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:18:14.689 16:29:48 -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:18:14.689 16:29:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:14.689 16:29:48 -- common/autotest_common.sh@10 -- # set +x 00:18:14.689 16:29:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:14.689 16:29:48 -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:18:14.689 16:29:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:14.689 16:29:48 -- common/autotest_common.sh@10 -- # set +x 00:18:14.689 16:29:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:14.689 16:29:48 -- host/mdns_discovery.sh@139 -- # sleep 1 00:18:15.625 16:29:49 -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:18:15.625 16:29:49 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:15.625 16:29:49 -- host/mdns_discovery.sh@64 -- # sort 00:18:15.625 16:29:49 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:18:15.625 16:29:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:15.625 16:29:49 -- common/autotest_common.sh@10 -- # set +x 00:18:15.625 16:29:49 -- host/mdns_discovery.sh@64 -- # xargs 00:18:15.625 16:29:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:15.625 16:29:49 -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:18:15.625 16:29:49 -- host/mdns_discovery.sh@142 -- # get_notification_count 00:18:15.625 16:29:49 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:15.625 16:29:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:15.625 16:29:49 -- common/autotest_common.sh@10 -- # set +x 00:18:15.625 16:29:49 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:18:15.625 16:29:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:15.625 16:29:49 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:18:15.625 16:29:49 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:18:15.625 16:29:49 -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:18:15.625 16:29:49 -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:18:15.625 16:29:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:15.625 16:29:49 -- common/autotest_common.sh@10 -- # set +x 00:18:15.625 [2024-04-17 16:29:49.630745] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:15.625 [2024-04-17 16:29:49.631435] bdev_nvme.c:6880:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:18:15.625 [2024-04-17 16:29:49.631479] bdev_nvme.c:6861:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:15.625 [2024-04-17 16:29:49.631526] bdev_nvme.c:6880:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:18:15.625 [2024-04-17 16:29:49.631542] bdev_nvme.c:6861:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:15.625 16:29:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:15.625 16:29:49 -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:18:15.625 16:29:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:15.625 16:29:49 -- common/autotest_common.sh@10 -- # set +x 00:18:15.625 [2024-04-17 16:29:49.638630] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:15.625 [2024-04-17 16:29:49.639421] bdev_nvme.c:6880:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:18:15.625 [2024-04-17 16:29:49.639498] bdev_nvme.c:6880:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:18:15.625 16:29:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:15.625 16:29:49 -- host/mdns_discovery.sh@149 -- # sleep 1 00:18:15.884 [2024-04-17 16:29:49.770534] bdev_nvme.c:6822:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:18:15.884 [2024-04-17 16:29:49.770746] bdev_nvme.c:6822:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:18:15.884 [2024-04-17 16:29:49.828809] bdev_nvme.c:6717:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:18:15.884 [2024-04-17 16:29:49.828841] bdev_nvme.c:6676:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:15.884 [2024-04-17 16:29:49.828849] bdev_nvme.c:6676:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:18:15.884 [2024-04-17 16:29:49.828869] bdev_nvme.c:6861:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:15.884 [2024-04-17 16:29:49.829803] bdev_nvme.c:6717:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:18:15.884 [2024-04-17 16:29:49.829824] bdev_nvme.c:6676:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:18:15.884 [2024-04-17 16:29:49.829831] 
bdev_nvme.c:6676:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:18:15.884 [2024-04-17 16:29:49.829848] bdev_nvme.c:6861:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:15.884 [2024-04-17 16:29:49.875857] bdev_nvme.c:6676:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:15.884 [2024-04-17 16:29:49.875882] bdev_nvme.c:6676:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:18:15.884 [2024-04-17 16:29:49.875925] bdev_nvme.c:6676:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:18:15.884 [2024-04-17 16:29:49.875935] bdev_nvme.c:6676:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:18:16.821 16:29:50 -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:18:16.821 16:29:50 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:16.821 16:29:50 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:18:16.821 16:29:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:16.821 16:29:50 -- host/mdns_discovery.sh@68 -- # sort 00:18:16.821 16:29:50 -- common/autotest_common.sh@10 -- # set +x 00:18:16.821 16:29:50 -- host/mdns_discovery.sh@68 -- # xargs 00:18:16.821 16:29:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:16.821 16:29:50 -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:18:16.821 16:29:50 -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:18:16.821 16:29:50 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:16.821 16:29:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:16.821 16:29:50 -- common/autotest_common.sh@10 -- # set +x 00:18:16.821 16:29:50 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:18:16.821 16:29:50 -- host/mdns_discovery.sh@64 -- # sort 00:18:16.821 16:29:50 -- host/mdns_discovery.sh@64 -- # xargs 00:18:16.821 16:29:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:16.821 16:29:50 -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:18:16.821 16:29:50 -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:18:16.821 16:29:50 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:16.821 16:29:50 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:18:16.821 16:29:50 -- host/mdns_discovery.sh@72 -- # sort -n 00:18:16.821 16:29:50 -- host/mdns_discovery.sh@72 -- # xargs 00:18:16.821 16:29:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:16.822 16:29:50 -- common/autotest_common.sh@10 -- # set +x 00:18:16.822 16:29:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:16.822 16:29:50 -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:18:16.822 16:29:50 -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:18:16.822 16:29:50 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:18:16.822 16:29:50 -- common/autotest_common.sh@549 -- 
# xtrace_disable 00:18:16.822 16:29:50 -- common/autotest_common.sh@10 -- # set +x 00:18:16.822 16:29:50 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:16.822 16:29:50 -- host/mdns_discovery.sh@72 -- # sort -n 00:18:16.822 16:29:50 -- host/mdns_discovery.sh@72 -- # xargs 00:18:16.822 16:29:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:17.082 16:29:50 -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:18:17.082 16:29:50 -- host/mdns_discovery.sh@155 -- # get_notification_count 00:18:17.082 16:29:50 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:18:17.082 16:29:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:17.082 16:29:50 -- common/autotest_common.sh@10 -- # set +x 00:18:17.082 16:29:50 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:18:17.082 16:29:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:17.082 16:29:50 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:18:17.082 16:29:50 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:18:17.082 16:29:50 -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:18:17.082 16:29:50 -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:17.082 16:29:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:17.082 16:29:50 -- common/autotest_common.sh@10 -- # set +x 00:18:17.082 [2024-04-17 16:29:50.947733] bdev_nvme.c:6880:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:18:17.082 [2024-04-17 16:29:50.947818] bdev_nvme.c:6861:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:17.082 [2024-04-17 16:29:50.947867] bdev_nvme.c:6880:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:18:17.082 [2024-04-17 16:29:50.947883] bdev_nvme.c:6861:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:17.082 16:29:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:17.082 16:29:50 -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:18:17.082 16:29:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:17.082 16:29:50 -- common/autotest_common.sh@10 -- # set +x 00:18:17.082 [2024-04-17 16:29:50.953875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.082 [2024-04-17 16:29:50.953910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.082 [2024-04-17 16:29:50.953924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.082 [2024-04-17 16:29:50.953933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.082 [2024-04-17 16:29:50.953943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.082 [2024-04-17 16:29:50.953952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.082 [2024-04-17 16:29:50.953963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 
cdw11:00000000 00:18:17.082 [2024-04-17 16:29:50.953971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.082 [2024-04-17 16:29:50.953980] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0c220 is same with the state(5) to be set 00:18:17.082 [2024-04-17 16:29:50.955709] bdev_nvme.c:6880:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:18:17.082 [2024-04-17 16:29:50.955766] bdev_nvme.c:6880:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:18:17.082 [2024-04-17 16:29:50.959517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.082 [2024-04-17 16:29:50.959542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.082 [2024-04-17 16:29:50.959555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.082 [2024-04-17 16:29:50.959564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.082 [2024-04-17 16:29:50.959574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.082 [2024-04-17 16:29:50.959583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.082 [2024-04-17 16:29:50.959594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.082 [2024-04-17 16:29:50.959603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.082 [2024-04-17 16:29:50.959611] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf90d0 is same with the state(5) to be set 00:18:17.082 16:29:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:17.082 16:29:50 -- host/mdns_discovery.sh@162 -- # sleep 1 00:18:17.082 [2024-04-17 16:29:50.963834] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0c220 (9): Bad file descriptor 00:18:17.082 [2024-04-17 16:29:50.969482] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf90d0 (9): Bad file descriptor 00:18:17.082 [2024-04-17 16:29:50.973858] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:17.082 [2024-04-17 16:29:50.974074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:17.082 [2024-04-17 16:29:50.974140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:17.082 [2024-04-17 16:29:50.974171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0c220 with addr=10.0.0.2, port=4420 00:18:17.082 [2024-04-17 16:29:50.974185] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0c220 is same with the state(5) to be set 00:18:17.082 [2024-04-17 16:29:50.974205] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0c220 (9): Bad file descriptor 00:18:17.082 [2024-04-17 16:29:50.974233] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: 
*ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:17.082 [2024-04-17 16:29:50.974244] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:17.082 [2024-04-17 16:29:50.974255] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:17.082 [2024-04-17 16:29:50.974272] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:17.082 [2024-04-17 16:29:50.979498] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:18:17.082 [2024-04-17 16:29:50.979607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:17.082 [2024-04-17 16:29:50.979655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:17.082 [2024-04-17 16:29:50.979672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf90d0 with addr=10.0.0.3, port=4420 00:18:17.082 [2024-04-17 16:29:50.979683] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf90d0 is same with the state(5) to be set 00:18:17.082 [2024-04-17 16:29:50.979699] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf90d0 (9): Bad file descriptor 00:18:17.082 [2024-04-17 16:29:50.979730] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:18:17.082 [2024-04-17 16:29:50.979742] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:18:17.082 [2024-04-17 16:29:50.979751] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:18:17.082 [2024-04-17 16:29:50.979767] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:17.082 [2024-04-17 16:29:50.983964] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:17.082 [2024-04-17 16:29:50.984076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:17.082 [2024-04-17 16:29:50.984129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:17.082 [2024-04-17 16:29:50.984153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0c220 with addr=10.0.0.2, port=4420 00:18:17.082 [2024-04-17 16:29:50.984163] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0c220 is same with the state(5) to be set 00:18:17.082 [2024-04-17 16:29:50.984180] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0c220 (9): Bad file descriptor 00:18:17.082 [2024-04-17 16:29:50.984194] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:17.082 [2024-04-17 16:29:50.984202] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:17.082 [2024-04-17 16:29:50.984211] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:17.082 [2024-04-17 16:29:50.984225] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
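The connect() errno = 111 (ECONNREFUSED) loops here are the expected fallout of nvmf_subsystem_remove_listener on port 4420: bdev_nvme keeps retrying the now-stale path until the next discovery log page prunes it. The check that only 4421 survives uses the same trsvcid helper seen earlier; a sketch (the expected value is an inference from the test flow, not yet shown in the trace):

    # Sketch of the per-controller path check used around listener removal.
    get_subsystem_paths() {
      scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }
    [[ $(get_subsystem_paths mdns1_nvme0) == 4421 ]]   # inferred expectation once 4420 is gone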
[... the identical reset cycle repeats 20 more times between 16:29:50.989569 and 16:29:51.084817 (wall clock 00:18:17.082 - 00:18:17.084), alternating every ~5 ms between [nqn.2016-06.io.spdk:cnode20] (tqpair=0xbf90d0, addr=10.0.0.3, port=4420) and [nqn.2016-06.io.spdk:cnode0] (tqpair=0xc0c220, addr=10.0.0.2, port=4420); each iteration logs the same sequence: nvme_ctrlr_disconnect *NOTICE* "resetting controller", posix_sock_create *ERROR* "connect() failed, errno = 111" (twice), nvme_tcp_qpair_connect_sock *ERROR* "sock connection error", nvme_tcp_qpair_set_recv_state *ERROR*, nvme_tcp_qpair_process_completions *ERROR* "Failed to flush ... (9): Bad file descriptor", nvme_ctrlr_process_init *ERROR* "Ctrlr is in error state", spdk_nvme_ctrlr_reconnect_poll_async *ERROR* "controller reinitialization failed", nvme_ctrlr_fail *ERROR* "in failed state.", and _bdev_nvme_reset_ctrlr_complete *ERROR* "Resetting controller failed." ...]
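Every connect() failure in the loop above carries errno = 111, which on Linux is ECONNREFUSED: at this point in the test nothing is listening on 10.0.0.2:4420 or 10.0.0.3:4420, because the subsystems have moved to port 4421 (see the discovery entries that follow). A minimal bash sketch for checking that by hand, run from wherever those addresses are reachable -- the addresses and ports come from the log, but the probe loop itself is illustrative and not part of the test scripts:

for addr in 10.0.0.2 10.0.0.3; do
  for port in 4420 4421; do
    # bash's /dev/tcp/<host>/<port> attempts a real TCP connect; a refused
    # connection fails the redirection just as connect() fails with errno 111
    if timeout 1 bash -c "exec 3<>/dev/tcp/$addr/$port" 2>/dev/null; then
      echo "$addr:$port accepting connections"
    else
      echo "$addr:$port refused/unreachable (connect() would see errno 111)"
    fi
  done
done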
00:18:17.085 [2024-04-17 16:29:51.088290] bdev_nvme.c:6685:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:18:17.085 [2024-04-17 16:29:51.088321] bdev_nvme.c:6676:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:18:17.085 [2024-04-17 16:29:51.088355] bdev_nvme.c:6861:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:17.085 [2024-04-17 16:29:51.088392] bdev_nvme.c:6685:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:18:17.085 [2024-04-17 16:29:51.088408] bdev_nvme.c:6676:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:18:17.085 [2024-04-17 16:29:51.088429] bdev_nvme.c:6861:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:17.343 [2024-04-17 16:29:51.174410] bdev_nvme.c:6676:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:18:17.343 [2024-04-17 16:29:51.174489] bdev_nvme.c:6676:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:18:17.963 16:29:51 -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:18:17.964 16:29:51 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:17.964 16:29:51 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:18:17.964 16:29:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:17.964 16:29:51 -- common/autotest_common.sh@10 -- # set +x 00:18:17.964 16:29:51 -- host/mdns_discovery.sh@68 -- # xargs 00:18:17.964 16:29:51 -- host/mdns_discovery.sh@68 -- # sort 00:18:17.964 16:29:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:18.222 16:29:52 -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:18:18.222 16:29:52 -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:18:18.222 16:29:52 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:18:18.222 16:29:52 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:18.222 16:29:52 -- host/mdns_discovery.sh@64 -- # xargs 00:18:18.222 16:29:52 -- host/mdns_discovery.sh@64 -- # sort 00:18:18.222 16:29:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:18.222 16:29:52 -- common/autotest_common.sh@10 -- # set +x 00:18:18.222 16:29:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:18.222 16:29:52 -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:18:18.222 16:29:52 -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:18:18.222 16:29:52 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:18:18.222 16:29:52 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:18.222 16:29:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:18.222 16:29:52 -- host/mdns_discovery.sh@72 -- # sort -n 00:18:18.222 16:29:52 -- common/autotest_common.sh@10 -- # set +x 00:18:18.222 16:29:52 -- host/mdns_discovery.sh@72 -- # xargs 00:18:18.222 16:29:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
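The get_subsystem_names, get_bdev_list and get_subsystem_paths helpers traced here each boil down to a single JSON-RPC call against the host application's socket plus a jq flatten. Roughly equivalent direct invocations -- the rpc.py path, socket and jq filters are the ones visible elsewhere in this log; treat this as a standalone sketch rather than the helpers' exact definitions:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# controller names, expected here: "mdns0_nvme0 mdns1_nvme0"
$rpc -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
# bdev names, expected here: "mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2"
$rpc -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
# per-controller path service ports, expected here: "4421"
$rpc -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs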
00:18:18.222 16:29:52 -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:18:18.222 16:29:52 -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:18:18.222 16:29:52 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:18:18.222 16:29:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:18.222 16:29:52 -- common/autotest_common.sh@10 -- # set +x 00:18:18.222 16:29:52 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:18.222 16:29:52 -- host/mdns_discovery.sh@72 -- # sort -n 00:18:18.222 16:29:52 -- host/mdns_discovery.sh@72 -- # xargs 00:18:18.222 16:29:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:18.222 16:29:52 -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:18:18.222 16:29:52 -- host/mdns_discovery.sh@168 -- # get_notification_count 00:18:18.222 16:29:52 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:18:18.222 16:29:52 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:18:18.222 16:29:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:18.222 16:29:52 -- common/autotest_common.sh@10 -- # set +x 00:18:18.222 16:29:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:18.222 16:29:52 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:18:18.222 16:29:52 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:18:18.222 16:29:52 -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:18:18.222 16:29:52 -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:18:18.222 16:29:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:18.222 16:29:52 -- common/autotest_common.sh@10 -- # set +x 00:18:18.481 16:29:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:18.482 16:29:52 -- host/mdns_discovery.sh@172 -- # sleep 1 00:18:18.482 [2024-04-17 16:29:52.309549] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:18:19.417 16:29:53 -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:18:19.417 16:29:53 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:18:19.417 16:29:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:19.417 16:29:53 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:18:19.417 16:29:53 -- common/autotest_common.sh@10 -- # set +x 00:18:19.417 16:29:53 -- host/mdns_discovery.sh@80 -- # sort 00:18:19.417 16:29:53 -- host/mdns_discovery.sh@80 -- # xargs 00:18:19.417 16:29:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:19.417 16:29:53 -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:18:19.417 16:29:53 -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:18:19.417 16:29:53 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:19.417 16:29:53 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:18:19.417 16:29:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:19.417 16:29:53 -- common/autotest_common.sh@10 -- # set +x 00:18:19.417 16:29:53 -- host/mdns_discovery.sh@68 -- # sort 00:18:19.417 16:29:53 -- host/mdns_discovery.sh@68 -- # xargs 00:18:19.417 16:29:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:19.417 16:29:53 -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:18:19.417 16:29:53 -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:18:19.417 16:29:53 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:18:19.417 16:29:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:19.417 16:29:53 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:18:19.417 16:29:53 -- common/autotest_common.sh@10 -- # set +x 00:18:19.417 16:29:53 -- host/mdns_discovery.sh@64 -- # sort 00:18:19.417 16:29:53 -- host/mdns_discovery.sh@64 -- # xargs 00:18:19.418 16:29:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:19.418 16:29:53 -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:18:19.418 16:29:53 -- host/mdns_discovery.sh@177 -- # get_notification_count 00:18:19.418 16:29:53 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:18:19.418 16:29:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:19.418 16:29:53 -- common/autotest_common.sh@10 -- # set +x 00:18:19.418 16:29:53 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:18:19.676 16:29:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:19.676 16:29:53 -- host/mdns_discovery.sh@87 -- # notification_count=4 00:18:19.676 16:29:53 -- host/mdns_discovery.sh@88 -- # notify_id=8 00:18:19.676 16:29:53 -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:18:19.676 16:29:53 -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:18:19.676 16:29:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:19.676 16:29:53 -- common/autotest_common.sh@10 -- # set +x 00:18:19.676 16:29:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:19.676 16:29:53 -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:18:19.676 16:29:53 -- common/autotest_common.sh@638 -- # local es=0 00:18:19.676 16:29:53 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:18:19.676 16:29:53 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:18:19.676 16:29:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:19.676 16:29:53 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:18:19.676 16:29:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:19.676 16:29:53 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:18:19.676 16:29:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:19.676 16:29:53 -- common/autotest_common.sh@10 -- # set +x 00:18:19.676 [2024-04-17 16:29:53.515447] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:18:19.676 2024/04/17 16:29:53 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:18:19.676 request: 00:18:19.676 { 00:18:19.676 "method": "bdev_nvme_start_mdns_discovery", 00:18:19.676 "params": { 00:18:19.676 "name": "mdns", 00:18:19.676 "svcname": "_nvme-disc._http", 00:18:19.676 "hostnqn": "nqn.2021-12.io.spdk:test" 00:18:19.676 } 00:18:19.676 } 00:18:19.676 Got JSON-RPC error response 00:18:19.676 GoRPCClient: error on JSON-RPC call 00:18:19.676 16:29:53 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:18:19.676 16:29:53 -- 
common/autotest_common.sh@641 -- # es=1 00:18:19.676 16:29:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:19.676 16:29:53 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:19.676 16:29:53 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:19.676 16:29:53 -- host/mdns_discovery.sh@183 -- # sleep 5 00:18:19.935 [2024-04-17 16:29:53.904128] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:18:20.193 [2024-04-17 16:29:54.004121] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:18:20.193 [2024-04-17 16:29:54.104123] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:18:20.193 [2024-04-17 16:29:54.104166] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.3) 00:18:20.193 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:18:20.193 cookie is 0 00:18:20.193 is_local: 1 00:18:20.193 our_own: 0 00:18:20.193 wide_area: 0 00:18:20.193 multicast: 1 00:18:20.193 cached: 1 00:18:20.193 [2024-04-17 16:29:54.204131] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:18:20.193 [2024-04-17 16:29:54.204181] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.2) 00:18:20.193 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:18:20.193 cookie is 0 00:18:20.193 is_local: 1 00:18:20.193 our_own: 0 00:18:20.193 wide_area: 0 00:18:20.193 multicast: 1 00:18:20.193 cached: 1 00:18:21.125 [2024-04-17 16:29:55.117271] bdev_nvme.c:6898:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:18:21.125 [2024-04-17 16:29:55.117316] bdev_nvme.c:6978:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:18:21.125 [2024-04-17 16:29:55.117337] bdev_nvme.c:6861:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:21.383 [2024-04-17 16:29:55.203389] bdev_nvme.c:6827:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:18:21.383 [2024-04-17 16:29:55.216940] bdev_nvme.c:6898:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:21.383 [2024-04-17 16:29:55.216981] bdev_nvme.c:6978:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:21.383 [2024-04-17 16:29:55.217000] bdev_nvme.c:6861:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:21.383 [2024-04-17 16:29:55.273369] bdev_nvme.c:6717:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:18:21.383 [2024-04-17 16:29:55.273413] bdev_nvme.c:6676:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:18:21.383 [2024-04-17 16:29:55.303551] bdev_nvme.c:6827:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:18:21.383 [2024-04-17 16:29:55.363044] bdev_nvme.c:6717:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:18:21.383 [2024-04-17 16:29:55.363094] bdev_nvme.c:6676:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:18:24.670 16:29:58 -- host/mdns_discovery.sh@185 -- # 
get_mdns_discovery_svcs 00:18:24.670 16:29:58 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:18:24.670 16:29:58 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:18:24.670 16:29:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:24.670 16:29:58 -- host/mdns_discovery.sh@80 -- # sort 00:18:24.670 16:29:58 -- common/autotest_common.sh@10 -- # set +x 00:18:24.670 16:29:58 -- host/mdns_discovery.sh@80 -- # xargs 00:18:24.670 16:29:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:24.670 16:29:58 -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:18:24.670 16:29:58 -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:18:24.670 16:29:58 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:24.670 16:29:58 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:18:24.670 16:29:58 -- host/mdns_discovery.sh@76 -- # sort 00:18:24.670 16:29:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:24.670 16:29:58 -- host/mdns_discovery.sh@76 -- # xargs 00:18:24.670 16:29:58 -- common/autotest_common.sh@10 -- # set +x 00:18:24.670 16:29:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:24.670 16:29:58 -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:18:24.670 16:29:58 -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:18:24.670 16:29:58 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:18:24.670 16:29:58 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:24.670 16:29:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:24.670 16:29:58 -- common/autotest_common.sh@10 -- # set +x 00:18:24.670 16:29:58 -- host/mdns_discovery.sh@64 -- # sort 00:18:24.670 16:29:58 -- host/mdns_discovery.sh@64 -- # xargs 00:18:24.670 16:29:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:24.670 16:29:58 -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:18:24.670 16:29:58 -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:18:24.670 16:29:58 -- common/autotest_common.sh@638 -- # local es=0 00:18:24.670 16:29:58 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:18:24.670 16:29:58 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:18:24.670 16:29:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:24.670 16:29:58 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:18:24.670 16:29:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:24.670 16:29:58 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:18:24.670 16:29:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:24.670 16:29:58 -- common/autotest_common.sh@10 -- # set +x 00:18:24.670 [2024-04-17 16:29:58.700276] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:18:24.670 2024/04/17 16:29:58 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test 
name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:18:24.670 request: 00:18:24.670 { 00:18:24.670 "method": "bdev_nvme_start_mdns_discovery", 00:18:24.670 "params": { 00:18:24.670 "name": "cdc", 00:18:24.670 "svcname": "_nvme-disc._tcp", 00:18:24.670 "hostnqn": "nqn.2021-12.io.spdk:test" 00:18:24.670 } 00:18:24.670 } 00:18:24.670 Got JSON-RPC error response 00:18:24.670 GoRPCClient: error on JSON-RPC call 00:18:24.670 16:29:58 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:18:24.670 16:29:58 -- common/autotest_common.sh@641 -- # es=1 00:18:24.670 16:29:58 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:24.670 16:29:58 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:24.670 16:29:58 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:24.670 16:29:58 -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:18:24.670 16:29:58 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:24.670 16:29:58 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:18:24.670 16:29:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:24.670 16:29:58 -- host/mdns_discovery.sh@76 -- # sort 00:18:24.670 16:29:58 -- common/autotest_common.sh@10 -- # set +x 00:18:24.670 16:29:58 -- host/mdns_discovery.sh@76 -- # xargs 00:18:24.929 16:29:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:24.929 16:29:58 -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:18:24.929 16:29:58 -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:18:24.929 16:29:58 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:24.929 16:29:58 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:18:24.929 16:29:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:24.929 16:29:58 -- common/autotest_common.sh@10 -- # set +x 00:18:24.929 16:29:58 -- host/mdns_discovery.sh@64 -- # sort 00:18:24.929 16:29:58 -- host/mdns_discovery.sh@64 -- # xargs 00:18:24.929 16:29:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:24.929 16:29:58 -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:18:24.929 16:29:58 -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:18:24.929 16:29:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:24.929 16:29:58 -- common/autotest_common.sh@10 -- # set +x 00:18:24.929 16:29:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:24.929 16:29:58 -- host/mdns_discovery.sh@195 -- # trap - SIGINT SIGTERM EXIT 00:18:24.929 16:29:58 -- host/mdns_discovery.sh@197 -- # kill 86442 00:18:24.929 16:29:58 -- host/mdns_discovery.sh@200 -- # wait 86442 00:18:24.929 [2024-04-17 16:29:58.932561] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:18:25.188 16:29:59 -- host/mdns_discovery.sh@201 -- # kill 86522 00:18:25.188 Got SIGTERM, quitting. 00:18:25.188 16:29:59 -- host/mdns_discovery.sh@202 -- # kill 86471 00:18:25.188 16:29:59 -- host/mdns_discovery.sh@203 -- # nvmftestfini 00:18:25.188 16:29:59 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:25.188 16:29:59 -- nvmf/common.sh@117 -- # sync 00:18:25.188 Got SIGTERM, quitting. 
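Both negative cases above hit the same guard: only one mDNS discovery poller may run per bdev name and per mDNS service name, so a second bdev_nvme_start_mdns_discovery returns JSON-RPC Code=-17 (File exists) whether the name ("mdns") or the service ("_nvme-disc._tcp") is already in use. A sketch of the pattern the test exercises, with the arguments taken from the log and the shell error handling being illustrative:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# first start succeeds and begins polling avahi for _nvme-disc._tcp
$rpc -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
# any second start that reuses the name or the service is expected to fail
if ! $rpc -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 2>/dev/null; then
  echo "duplicate mDNS discovery rejected with File exists, as the test expects"
fi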
00:18:25.188 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:18:25.188 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:18:25.188 avahi-daemon 0.8 exiting. 00:18:25.188 16:29:59 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:25.188 16:29:59 -- nvmf/common.sh@120 -- # set +e 00:18:25.188 16:29:59 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:25.188 16:29:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:25.188 rmmod nvme_tcp 00:18:25.188 rmmod nvme_fabrics 00:18:25.188 rmmod nvme_keyring 00:18:25.188 16:29:59 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:25.188 16:29:59 -- nvmf/common.sh@124 -- # set -e 00:18:25.188 16:29:59 -- nvmf/common.sh@125 -- # return 0 00:18:25.188 16:29:59 -- nvmf/common.sh@478 -- # '[' -n 86391 ']' 00:18:25.188 16:29:59 -- nvmf/common.sh@479 -- # killprocess 86391 00:18:25.188 16:29:59 -- common/autotest_common.sh@936 -- # '[' -z 86391 ']' 00:18:25.188 16:29:59 -- common/autotest_common.sh@940 -- # kill -0 86391 00:18:25.188 16:29:59 -- common/autotest_common.sh@941 -- # uname 00:18:25.188 16:29:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:25.188 16:29:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86391 00:18:25.188 16:29:59 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:25.188 16:29:59 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:25.188 killing process with pid 86391 00:18:25.188 16:29:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86391' 00:18:25.188 16:29:59 -- common/autotest_common.sh@955 -- # kill 86391 00:18:25.188 16:29:59 -- common/autotest_common.sh@960 -- # wait 86391 00:18:25.446 16:29:59 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:25.446 16:29:59 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:25.446 16:29:59 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:25.446 16:29:59 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:25.446 16:29:59 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:25.446 16:29:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:25.446 16:29:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:25.446 16:29:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:25.727 16:29:59 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:25.727 ************************************ 00:18:25.727 END TEST nvmf_mdns_discovery 00:18:25.727 ************************************ 00:18:25.727 00:18:25.727 real 0m20.753s 00:18:25.727 user 0m40.458s 00:18:25.727 sys 0m2.072s 00:18:25.727 16:29:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:25.727 16:29:59 -- common/autotest_common.sh@10 -- # set +x 00:18:25.727 16:29:59 -- nvmf/nvmf.sh@113 -- # [[ 1 -eq 1 ]] 00:18:25.727 16:29:59 -- nvmf/nvmf.sh@114 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:25.727 16:29:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:25.727 16:29:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:25.727 16:29:59 -- common/autotest_common.sh@10 -- # set +x 00:18:25.727 ************************************ 00:18:25.727 START TEST nvmf_multipath 00:18:25.727 ************************************ 00:18:25.727 16:29:59 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:25.727 * Looking for 
test storage... 00:18:25.727 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:25.727 16:29:59 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:25.727 16:29:59 -- nvmf/common.sh@7 -- # uname -s 00:18:25.727 16:29:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:25.727 16:29:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:25.727 16:29:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:25.727 16:29:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:25.727 16:29:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:25.727 16:29:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:25.727 16:29:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:25.727 16:29:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:25.727 16:29:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:25.727 16:29:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:25.727 16:29:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d 00:18:25.727 16:29:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=35bbb10f-fc38-42ac-b909-033700c5e05d 00:18:25.727 16:29:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:25.727 16:29:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:25.727 16:29:59 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:25.727 16:29:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:25.727 16:29:59 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:25.727 16:29:59 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:25.727 16:29:59 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:25.727 16:29:59 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:25.727 16:29:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.727 16:29:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.727 16:29:59 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.727 16:29:59 -- paths/export.sh@5 -- # export PATH 00:18:25.727 16:29:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.727 16:29:59 -- nvmf/common.sh@47 -- # : 0 00:18:25.727 16:29:59 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:25.727 16:29:59 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:25.727 16:29:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:25.727 16:29:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:25.727 16:29:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:25.727 16:29:59 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:25.727 16:29:59 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:25.727 16:29:59 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:25.727 16:29:59 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:25.727 16:29:59 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:25.727 16:29:59 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:25.727 16:29:59 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:25.727 16:29:59 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:25.727 16:29:59 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:18:25.727 16:29:59 -- host/multipath.sh@30 -- # nvmftestinit 00:18:25.727 16:29:59 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:25.727 16:29:59 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:25.727 16:29:59 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:25.727 16:29:59 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:25.727 16:29:59 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:25.727 16:29:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:25.728 16:29:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:25.728 16:29:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:25.728 16:29:59 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:18:25.728 16:29:59 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:18:25.728 16:29:59 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:18:25.728 16:29:59 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:18:25.728 16:29:59 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:18:25.728 16:29:59 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:18:25.728 16:29:59 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:25.728 16:29:59 -- nvmf/common.sh@142 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:25.728 16:29:59 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:25.728 16:29:59 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:25.728 16:29:59 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:25.728 16:29:59 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:25.728 16:29:59 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:25.728 16:29:59 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:25.728 16:29:59 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:25.728 16:29:59 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:25.728 16:29:59 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:25.728 16:29:59 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:25.728 16:29:59 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:25.986 16:29:59 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:25.986 Cannot find device "nvmf_tgt_br" 00:18:25.986 16:29:59 -- nvmf/common.sh@155 -- # true 00:18:25.986 16:29:59 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:25.986 Cannot find device "nvmf_tgt_br2" 00:18:25.986 16:29:59 -- nvmf/common.sh@156 -- # true 00:18:25.986 16:29:59 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:25.986 16:29:59 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:25.986 Cannot find device "nvmf_tgt_br" 00:18:25.986 16:29:59 -- nvmf/common.sh@158 -- # true 00:18:25.986 16:29:59 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:25.986 Cannot find device "nvmf_tgt_br2" 00:18:25.986 16:29:59 -- nvmf/common.sh@159 -- # true 00:18:25.986 16:29:59 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:25.986 16:29:59 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:25.986 16:29:59 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:25.986 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:25.986 16:29:59 -- nvmf/common.sh@162 -- # true 00:18:25.986 16:29:59 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:25.986 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:25.986 16:29:59 -- nvmf/common.sh@163 -- # true 00:18:25.986 16:29:59 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:25.986 16:29:59 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:25.986 16:29:59 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:25.986 16:29:59 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:25.986 16:29:59 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:25.986 16:29:59 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:25.986 16:29:59 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:25.986 16:29:59 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:25.986 16:29:59 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:25.986 16:29:59 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:25.986 16:29:59 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:25.986 16:29:59 -- nvmf/common.sh@185 -- # ip 
link set nvmf_tgt_br up 00:18:25.986 16:29:59 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:25.986 16:29:59 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:25.986 16:29:59 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:25.986 16:29:59 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:25.986 16:29:59 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:25.986 16:29:59 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:25.986 16:29:59 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:25.986 16:30:00 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:25.986 16:30:00 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:26.246 16:30:00 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:26.246 16:30:00 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:26.246 16:30:00 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:26.246 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:26.246 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:18:26.246 00:18:26.246 --- 10.0.0.2 ping statistics --- 00:18:26.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:26.246 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:18:26.246 16:30:00 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:26.246 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:26.246 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:18:26.246 00:18:26.246 --- 10.0.0.3 ping statistics --- 00:18:26.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:26.246 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:18:26.246 16:30:00 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:26.246 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:26.246 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms 00:18:26.246 00:18:26.246 --- 10.0.0.1 ping statistics --- 00:18:26.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:26.246 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:18:26.246 16:30:00 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:26.246 16:30:00 -- nvmf/common.sh@422 -- # return 0 00:18:26.246 16:30:00 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:26.246 16:30:00 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:26.246 16:30:00 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:26.246 16:30:00 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:26.246 16:30:00 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:26.246 16:30:00 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:26.246 16:30:00 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:26.246 16:30:00 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:18:26.246 16:30:00 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:26.246 16:30:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:26.246 16:30:00 -- common/autotest_common.sh@10 -- # set +x 00:18:26.246 16:30:00 -- nvmf/common.sh@470 -- # nvmfpid=87042 00:18:26.246 16:30:00 -- nvmf/common.sh@471 -- # waitforlisten 87042 00:18:26.246 16:30:00 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:26.246 16:30:00 -- common/autotest_common.sh@817 -- # '[' -z 87042 ']' 00:18:26.246 16:30:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:26.246 16:30:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:26.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:26.246 16:30:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:26.246 16:30:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:26.246 16:30:00 -- common/autotest_common.sh@10 -- # set +x 00:18:26.246 [2024-04-17 16:30:00.157405] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:18:26.246 [2024-04-17 16:30:00.157506] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:26.505 [2024-04-17 16:30:00.298374] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:26.505 [2024-04-17 16:30:00.434644] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:26.505 [2024-04-17 16:30:00.434713] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:26.505 [2024-04-17 16:30:00.434727] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:26.505 [2024-04-17 16:30:00.434739] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:26.505 [2024-04-17 16:30:00.434748] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
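The nvmf_veth_init sequence traced above builds the whole test network from scratch: one network namespace for the target, three veth pairs, and a bridge tying the initiator end to both target ends. Condensed into a standalone sketch (interface names, addresses and firewall rules taken from the trace; the cleanup half and error handling omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  # target-side ends move into the namespace; the *_br ends stay in the root namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target IP
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3             # initiator -> both target IPs
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator

Note that although two target addresses exist, every listener in this run sits on 10.0.0.2; the two multipath paths are distinguished by port (4420 vs 4421), not by address.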
00:18:26.505 [2024-04-17 16:30:00.435110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:26.505 [2024-04-17 16:30:00.435137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.440 16:30:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:27.440 16:30:01 -- common/autotest_common.sh@850 -- # return 0 00:18:27.440 16:30:01 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:27.440 16:30:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:27.440 16:30:01 -- common/autotest_common.sh@10 -- # set +x 00:18:27.440 16:30:01 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:27.440 16:30:01 -- host/multipath.sh@33 -- # nvmfapp_pid=87042 00:18:27.440 16:30:01 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:27.699 [2024-04-17 16:30:01.499187] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:27.699 16:30:01 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:27.957 Malloc0 00:18:27.957 16:30:01 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:18:28.215 16:30:02 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:28.472 16:30:02 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:28.730 [2024-04-17 16:30:02.580523] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:28.730 16:30:02 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:28.989 [2024-04-17 16:30:02.820614] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:28.989 16:30:02 -- host/multipath.sh@44 -- # bdevperf_pid=87140 00:18:28.989 16:30:02 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:28.989 16:30:02 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:28.989 16:30:02 -- host/multipath.sh@47 -- # waitforlisten 87140 /var/tmp/bdevperf.sock 00:18:28.989 16:30:02 -- common/autotest_common.sh@817 -- # '[' -z 87140 ']' 00:18:28.989 16:30:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:28.989 16:30:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:28.989 16:30:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:28.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
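At this point the target side is fully provisioned: a TCP transport, a 64 MiB Malloc bdev with 512-byte blocks, and an ANA-reporting subsystem with two listeners on the same address, one per port. The same sequence replayed as plain RPCs (arguments copied verbatim from the trace; rpc.py reaches the target over the default /var/tmp/spdk.sock, so no netns prefix is needed):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  # -r enables ANA reporting, which is what makes the per-listener states below meaningful
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # two listeners on one IP, different ports: these are the two paths
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

bdevperf, launched above with -z so it idles until driven over /var/tmp/bdevperf.sock, is then given the same controller twice in the trace that follows, once per port, with -x multipath on the second bdev_nvme_attach_controller call so the new connection is added as an extra path to Nvme0 instead of being rejected as a duplicate controller name.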
00:18:28.989 16:30:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:28.989 16:30:02 -- common/autotest_common.sh@10 -- # set +x 00:18:29.924 16:30:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:29.924 16:30:03 -- common/autotest_common.sh@850 -- # return 0 00:18:29.924 16:30:03 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:30.183 16:30:04 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:18:30.750 Nvme0n1 00:18:30.750 16:30:04 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:31.008 Nvme0n1 00:18:31.008 16:30:04 -- host/multipath.sh@78 -- # sleep 1 00:18:31.008 16:30:04 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:31.943 16:30:05 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:18:31.943 16:30:05 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:32.201 16:30:06 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:32.459 16:30:06 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:18:32.459 16:30:06 -- host/multipath.sh@65 -- # dtrace_pid=87233 00:18:32.459 16:30:06 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 87042 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:32.459 16:30:06 -- host/multipath.sh@66 -- # sleep 6 00:18:39.020 16:30:12 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:39.020 16:30:12 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:39.020 16:30:12 -- host/multipath.sh@67 -- # active_port=4421 00:18:39.020 16:30:12 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:39.020 Attaching 4 probes... 
00:18:39.020 @path[10.0.0.2, 4421]: 16815 00:18:39.020 @path[10.0.0.2, 4421]: 16557 00:18:39.020 @path[10.0.0.2, 4421]: 16828 00:18:39.020 @path[10.0.0.2, 4421]: 16838 00:18:39.020 @path[10.0.0.2, 4421]: 17040 00:18:39.020 16:30:12 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:39.020 16:30:12 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:39.020 16:30:12 -- host/multipath.sh@69 -- # sed -n 1p 00:18:39.020 16:30:12 -- host/multipath.sh@69 -- # port=4421 00:18:39.020 16:30:12 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:39.020 16:30:12 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:39.020 16:30:12 -- host/multipath.sh@72 -- # kill 87233 00:18:39.020 16:30:12 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:39.020 16:30:12 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:18:39.020 16:30:12 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:39.020 16:30:13 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:39.278 16:30:13 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:18:39.278 16:30:13 -- host/multipath.sh@65 -- # dtrace_pid=87369 00:18:39.278 16:30:13 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 87042 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:39.278 16:30:13 -- host/multipath.sh@66 -- # sleep 6 00:18:45.854 16:30:19 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:45.854 16:30:19 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:45.855 16:30:19 -- host/multipath.sh@67 -- # active_port=4420 00:18:45.855 16:30:19 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:45.855 Attaching 4 probes... 
00:18:45.855 @path[10.0.0.2, 4420]: 16966 00:18:45.855 @path[10.0.0.2, 4420]: 17164 00:18:45.855 @path[10.0.0.2, 4420]: 16888 00:18:45.855 @path[10.0.0.2, 4420]: 16646 00:18:45.855 @path[10.0.0.2, 4420]: 17016 00:18:45.855 16:30:19 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:45.855 16:30:19 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:45.855 16:30:19 -- host/multipath.sh@69 -- # sed -n 1p 00:18:45.855 16:30:19 -- host/multipath.sh@69 -- # port=4420 00:18:45.855 16:30:19 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:45.855 16:30:19 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:45.855 16:30:19 -- host/multipath.sh@72 -- # kill 87369 00:18:45.855 16:30:19 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:45.855 16:30:19 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:18:45.855 16:30:19 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:45.855 16:30:19 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:46.112 16:30:20 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:18:46.112 16:30:20 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 87042 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:46.112 16:30:20 -- host/multipath.sh@65 -- # dtrace_pid=87499 00:18:46.112 16:30:20 -- host/multipath.sh@66 -- # sleep 6 00:18:52.687 16:30:26 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:52.687 16:30:26 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:52.687 16:30:26 -- host/multipath.sh@67 -- # active_port=4421 00:18:52.687 16:30:26 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:52.687 Attaching 4 probes... 
00:18:52.687 @path[10.0.0.2, 4421]: 12548 00:18:52.687 @path[10.0.0.2, 4421]: 13436 00:18:52.687 @path[10.0.0.2, 4421]: 13603 00:18:52.687 @path[10.0.0.2, 4421]: 16742 00:18:52.687 @path[10.0.0.2, 4421]: 16930 00:18:52.687 16:30:26 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:52.687 16:30:26 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:52.687 16:30:26 -- host/multipath.sh@69 -- # sed -n 1p 00:18:52.687 16:30:26 -- host/multipath.sh@69 -- # port=4421 00:18:52.687 16:30:26 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:52.687 16:30:26 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:52.687 16:30:26 -- host/multipath.sh@72 -- # kill 87499 00:18:52.687 16:30:26 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:52.687 16:30:26 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:18:52.687 16:30:26 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:52.687 16:30:26 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:52.945 16:30:26 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:18:52.945 16:30:26 -- host/multipath.sh@65 -- # dtrace_pid=87630 00:18:52.945 16:30:26 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 87042 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:52.945 16:30:26 -- host/multipath.sh@66 -- # sleep 6 00:18:59.512 16:30:32 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:59.512 16:30:32 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:18:59.512 16:30:33 -- host/multipath.sh@67 -- # active_port= 00:18:59.512 16:30:33 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:59.512 Attaching 4 probes... 
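Every cycle in this test is the same two-helper pattern: set_ANA_state flips the advertised ANA state of each listener, then confirm_io_on_port checks that bdevperf's I/O actually followed. Neither function body is printed in the log, but the @58-@59 and @64-@73 records pin them down almost mechanically; a reconstruction under that assumption ($rootdir standing for /home/vagrant/spdk_repo/spdk, with $rpc_py, $bpf_sh, $NQN and $nvmfapp_pid as set earlier in multipath.sh):

  set_ANA_state() {
      # $1 = state for the 4420 listener, $2 = state for the 4421 listener
      $rpc_py nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      $rpc_py nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }

  confirm_io_on_port() {
      local expected_state=$1 expected_port=$2
      local trace_file=$rootdir/test/nvmf/host/trace.txt
      # sample I/O per path with bpftrace for six seconds; output lands in trace.txt
      $bpf_sh $nvmfapp_pid $rootdir/scripts/bpf/nvmf_path.bt &> "$trace_file" &
      dtrace_pid=$!
      sleep 6
      # the port the target reports in the expected ANA state...
      active_port=$($rpc_py nvmf_subsystem_get_listeners $NQN |
          jq -r ".[] | select (.ana_states[0].ana_state==\"$expected_state\") | .address.trsvcid")
      # ...must match the port the traced I/O actually hit (first @path sample)
      port=$(cat "$trace_file" | awk '$1=="@path[10.0.0.2," {print $2}' | cut -d ']' -f1 | sed -n 1p)
      [[ $port == "$expected_port" ]]
      [[ $active_port == "$expected_port" ]]
      kill $dtrace_pid
      rm -f "$trace_file"
  }

The confirm_io_on_port '' '' call traced next is the degenerate case: with both listeners inaccessible the jq select() matches nothing, so active_port comes back empty, bpftrace records no @path samples at all, and both comparisons succeed against the empty string.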
00:18:59.512 00:18:59.512 00:18:59.512 00:18:59.512 00:18:59.512 00:18:59.512 16:30:33 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:59.512 16:30:33 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:59.512 16:30:33 -- host/multipath.sh@69 -- # sed -n 1p 00:18:59.512 16:30:33 -- host/multipath.sh@69 -- # port= 00:18:59.512 16:30:33 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:18:59.512 16:30:33 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:18:59.512 16:30:33 -- host/multipath.sh@72 -- # kill 87630 00:18:59.512 16:30:33 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:59.512 16:30:33 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:18:59.513 16:30:33 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:59.513 16:30:33 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:59.772 16:30:33 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:18:59.772 16:30:33 -- host/multipath.sh@65 -- # dtrace_pid=87767 00:18:59.772 16:30:33 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 87042 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:59.772 16:30:33 -- host/multipath.sh@66 -- # sleep 6 00:19:06.334 16:30:39 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:06.334 16:30:39 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:06.334 16:30:40 -- host/multipath.sh@67 -- # active_port=4421 00:19:06.334 16:30:40 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:06.334 Attaching 4 probes... 
00:19:06.334 @path[10.0.0.2, 4421]: 16415 00:19:06.334 @path[10.0.0.2, 4421]: 16538 00:19:06.334 @path[10.0.0.2, 4421]: 16648 00:19:06.334 @path[10.0.0.2, 4421]: 16562 00:19:06.334 @path[10.0.0.2, 4421]: 16417 16:30:40 -- host/multipath.sh@69 -- # cut -d ']' -f1 16:30:40 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 16:30:40 -- host/multipath.sh@69 -- # sed -n 1p 16:30:40 -- host/multipath.sh@69 -- # port=4421 16:30:40 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 16:30:40 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 16:30:40 -- host/multipath.sh@72 -- # kill 87767 16:30:40 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 16:30:40 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 [2024-04-17 16:30:40.287327] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x914ae0 is same with the state(5) to be set (same message repeated 14 more times, 2024-04-17 16:30:40.287384 through 16:30:40.287504) [2024-04-17 16:30:40.287512] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0x914ae0 is same with the state(5) to be set 00:19:06.334 16:30:40 -- host/multipath.sh@101 -- # sleep 1 00:19:07.772 16:30:41 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:19:07.772 16:30:41 -- host/multipath.sh@65 -- # dtrace_pid=87898 00:19:07.773 16:30:41 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 87042 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:07.773 16:30:41 -- host/multipath.sh@66 -- # sleep 6 00:19:14.332 16:30:47 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:14.332 16:30:47 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:19:14.332 16:30:47 -- host/multipath.sh@67 -- # active_port=4420 00:19:14.332 16:30:47 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:14.332 Attaching 4 probes... 00:19:14.332 @path[10.0.0.2, 4420]: 15715 00:19:14.332 @path[10.0.0.2, 4420]: 16358 00:19:14.332 @path[10.0.0.2, 4420]: 16601 00:19:14.332 @path[10.0.0.2, 4420]: 16503 00:19:14.332 @path[10.0.0.2, 4420]: 16237 00:19:14.332 16:30:47 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:14.332 16:30:47 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:14.332 16:30:47 -- host/multipath.sh@69 -- # sed -n 1p 00:19:14.332 16:30:47 -- host/multipath.sh@69 -- # port=4420 00:19:14.332 16:30:47 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:19:14.332 16:30:47 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:19:14.332 16:30:47 -- host/multipath.sh@72 -- # kill 87898 00:19:14.332 16:30:47 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:14.332 16:30:47 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:14.332 [2024-04-17 16:30:47.923466] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:14.332 16:30:47 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:14.332 16:30:48 -- host/multipath.sh@111 -- # sleep 6 00:19:20.892 16:30:54 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:19:20.892 16:30:54 -- host/multipath.sh@65 -- # dtrace_pid=88095 00:19:20.892 16:30:54 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 87042 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:20.892 16:30:54 -- host/multipath.sh@66 -- # sleep 6 00:19:27.458 16:31:00 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:27.458 16:31:00 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:27.458 16:31:00 -- host/multipath.sh@67 -- # active_port=4421 00:19:27.458 16:31:00 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:27.458 Attaching 4 probes... 
00:19:27.458 @path[10.0.0.2, 4421]: 15206 00:19:27.458 @path[10.0.0.2, 4421]: 14414 00:19:27.458 @path[10.0.0.2, 4421]: 15517 00:19:27.458 @path[10.0.0.2, 4421]: 16464 00:19:27.458 @path[10.0.0.2, 4421]: 16607 00:19:27.458 16:31:00 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:27.458 16:31:00 -- host/multipath.sh@69 -- # sed -n 1p 00:19:27.458 16:31:00 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:27.458 16:31:00 -- host/multipath.sh@69 -- # port=4421 00:19:27.458 16:31:00 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:27.458 16:31:00 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:27.458 16:31:00 -- host/multipath.sh@72 -- # kill 88095 00:19:27.458 16:31:00 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:27.458 16:31:00 -- host/multipath.sh@114 -- # killprocess 87140 00:19:27.458 16:31:00 -- common/autotest_common.sh@936 -- # '[' -z 87140 ']' 00:19:27.458 16:31:00 -- common/autotest_common.sh@940 -- # kill -0 87140 00:19:27.458 16:31:00 -- common/autotest_common.sh@941 -- # uname 00:19:27.458 16:31:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:27.458 16:31:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87140 00:19:27.458 killing process with pid 87140 00:19:27.458 16:31:00 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:19:27.458 16:31:00 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:19:27.458 16:31:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87140' 00:19:27.458 16:31:00 -- common/autotest_common.sh@955 -- # kill 87140 00:19:27.458 16:31:00 -- common/autotest_common.sh@960 -- # wait 87140 00:19:27.458 Connection closed with partial response: 00:19:27.458 00:19:27.458 00:19:27.458 16:31:00 -- host/multipath.sh@116 -- # wait 87140 00:19:27.458 16:31:00 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:27.458 [2024-04-17 16:30:02.888964] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:19:27.458 [2024-04-17 16:30:02.889088] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87140 ] 00:19:27.458 [2024-04-17 16:30:03.024125] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.458 [2024-04-17 16:30:03.147831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:27.458 Running I/O for 90 seconds... 
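From here multipath.sh@118 is replaying bdevperf's own log (try.txt): roughly 90 seconds of per-I/O nvme_qpair records, each print_command NOTICE paired with a print_completion NOTICE carrying the ANA status that the path returned for that command. The dump is not meant to be read linearly; when sifting one like this, counting records per completion status is usually enough to locate the failover windows, e.g.:

  # completion status strings seen in the dumped log, with counts
  grep -o 'ASYMMETRIC ACCESS [A-Z]*' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt | sort | uniq -c
  # total command/completion records printed by the nvme driver
  grep -c 'nvme_qpair.c:' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt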
00:19:27.458 [2024-04-17 16:30:13.270294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:43440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.458 [2024-04-17 16:30:13.270379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:27.458 [2024-04-17 16:30:13.270451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:43448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.458 [2024-04-17 16:30:13.270474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:27.458 [2024-04-17 16:30:13.270498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:43456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.458 [2024-04-17 16:30:13.270515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:27.458 [2024-04-17 16:30:13.270538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:43464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.458 [2024-04-17 16:30:13.270554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:27.458 [2024-04-17 16:30:13.270577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:43472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.458 [2024-04-17 16:30:13.270593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:27.458 [2024-04-17 16:30:13.270615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:43480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.458 [2024-04-17 16:30:13.270631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:27.458 [2024-04-17 16:30:13.270654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:43488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.458 [2024-04-17 16:30:13.270670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:27.458 [2024-04-17 16:30:13.270693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:43496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.458 [2024-04-17 16:30:13.270709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:27.458 [2024-04-17 16:30:13.272558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:43504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.458 [2024-04-17 16:30:13.272588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:27.458 [2024-04-17 16:30:13.272617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:42736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.458 [2024-04-17 16:30:13.272636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:31 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:27.458 [2024-04-17 16:30:13.272658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:42744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.458 [2024-04-17 16:30:13.272695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:27.458 [2024-04-17 16:30:13.272721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:42752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.458 [2024-04-17 16:30:13.272738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:27.458 [2024-04-17 16:30:13.272760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:42760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.458 [2024-04-17 16:30:13.272791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:27.458 [2024-04-17 16:30:13.272816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:42768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.458 [2024-04-17 16:30:13.272832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:27.458 [2024-04-17 16:30:13.272855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:42776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.458 [2024-04-17 16:30:13.272871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:27.458 [2024-04-17 16:30:13.272893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:42784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.458 [2024-04-17 16:30:13.272908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:27.458 [2024-04-17 16:30:13.272931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:42792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.458 [2024-04-17 16:30:13.272947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:27.458 [2024-04-17 16:30:13.272969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:43512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.458 [2024-04-17 16:30:13.272984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:27.458 [2024-04-17 16:30:13.273006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:43520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.458 [2024-04-17 16:30:13.273022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:27.458 [2024-04-17 16:30:13.273044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:43528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.458 [2024-04-17 16:30:13.273060] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:27.458 [2024-04-17 16:30:13.273082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:43536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.458 [2024-04-17 16:30:13.273097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:27.458 [2024-04-17 16:30:13.273119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:43544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.458 [2024-04-17 16:30:13.273135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:27.459 [2024-04-17 16:30:13.273157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:43552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.459 [2024-04-17 16:30:13.273173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:27.459 [2024-04-17 16:30:13.273205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:42800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.459 [2024-04-17 16:30:13.273223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:27.459 [2024-04-17 16:30:13.273246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:42808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.459 [2024-04-17 16:30:13.273263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:27.459 [2024-04-17 16:30:13.273285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:42816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.459 [2024-04-17 16:30:13.273301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:27.459 [2024-04-17 16:30:13.273324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:42824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.459 [2024-04-17 16:30:13.273340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:27.459 [2024-04-17 16:30:13.273362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:42832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.459 [2024-04-17 16:30:13.273379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:27.459 [2024-04-17 16:30:13.273401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:42840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.459 [2024-04-17 16:30:13.273417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:27.459 [2024-04-17 16:30:13.273439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:42848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:27.459 [2024-04-17 16:30:13.273455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:27.459 [2024-04-17 16:30:13.273478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:42856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.459 [2024-04-17 16:30:13.273494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:27.459 [2024-04-17 16:30:13.273516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:42864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.459 [2024-04-17 16:30:13.273532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:27.459 [2024-04-17 16:30:13.273555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:42872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.459 [2024-04-17 16:30:13.273570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:27.459 [2024-04-17 16:30:13.273593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:42880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.459 [2024-04-17 16:30:13.273609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:27.459 [2024-04-17 16:30:13.273631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:42888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.459 [2024-04-17 16:30:13.273647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:27.459 [2024-04-17 16:30:13.273677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:42896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.459 [2024-04-17 16:30:13.273694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:27.459 [2024-04-17 16:30:13.273716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:42904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.459 [2024-04-17 16:30:13.273732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:27.459 [2024-04-17 16:30:13.273755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:42912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.459 [2024-04-17 16:30:13.273781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:27.459 [2024-04-17 16:30:13.273808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:42920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.459 [2024-04-17 16:30:13.273824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:27.459 [2024-04-17 16:30:13.273848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 
nsid:1 lba:42928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.459 [2024-04-17 16:30:13.273864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:27.459 [2024-04-17 16:30:13.273887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:42936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.459 [2024-04-17 16:30:13.273904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:27.459 [2024-04-17 16:30:13.273927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:42944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.459 [2024-04-17 16:30:13.273943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:27.459 [2024-04-17 16:30:13.273966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:42952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.459 [2024-04-17 16:30:13.273982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:27.459 [2024-04-17 16:30:13.274004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:42960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.459 [2024-04-17 16:30:13.274020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:27.459 [2024-04-17 16:30:13.274043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:42968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.459 [2024-04-17 16:30:13.274059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:27.459 [2024-04-17 16:30:13.274093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:42976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.459 [2024-04-17 16:30:13.274112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:27.459 [2024-04-17 16:30:13.274135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:42984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.459 [2024-04-17 16:30:13.274160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:27.459 [2024-04-17 16:30:13.274183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:42992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.459 [2024-04-17 16:30:13.274207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:27.459 [2024-04-17 16:30:13.274231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:43000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.459 [2024-04-17 16:30:13.274252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:27.459 [2024-04-17 16:30:13.274274] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:43008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.459 [2024-04-17 16:30:13.274290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:27.459 [2024-04-17 16:30:13.274313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:43016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.459 [2024-04-17 16:30:13.274329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:27.459 [2024-04-17 16:30:13.274351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:43024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.459 [2024-04-17 16:30:13.274367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:27.459 [2024-04-17 16:30:13.274390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:43032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.459 [2024-04-17 16:30:13.274406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:27.459 [2024-04-17 16:30:13.274429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:43040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.459 [2024-04-17 16:30:13.274445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:27.459 [2024-04-17 16:30:13.274475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:43048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.459 [2024-04-17 16:30:13.274491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:27.459 [2024-04-17 16:30:13.274514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:43056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.459 [2024-04-17 16:30:13.274530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:27.459 [2024-04-17 16:30:13.274555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:43064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.459 [2024-04-17 16:30:13.274571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:27.459 [2024-04-17 16:30:13.274595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:43072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.459 [2024-04-17 16:30:13.274611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:27.459 [2024-04-17 16:30:13.274634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:43080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.460 [2024-04-17 16:30:13.274650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 
sqhd:0064 p:0 m:0 dnr:0
00:19:27.460 [2024-04-17 16:30:13.274672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:43088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:27.460 [2024-04-17 16:30:13.274694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
[16:30:13.274717 - 16:30:13.278375: further READ (lba 43096-43432) and WRITE (lba 43560-43752) command/completion pairs on qid:1, every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02), elided]
00:19:27.461 [2024-04-17 16:30:19.761788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:90856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:27.461 [2024-04-17 16:30:19.761866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
[16:30:19.761926 - 16:30:19.767847: further WRITE (lba 90864-91240) and READ (lba 90224-90680) command/completion pairs on qid:1, every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02), elided]
00:19:27.464 [2024-04-17 16:30:19.767874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:90688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:27.464 [2024-04-17 16:30:19.767890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
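An aside on reading these prints: the "(03/02)" pair in each completion line is the NVMe Status Code Type / Status Code from the completion's Status Field. SCT 0x3 is Path Related Status and SC 0x02 is Asymmetric Access Inaccessible, i.e. the namespace's ANA state makes it unreachable through this controller; DNR 0 on every completion means the command may be retried. The p/m/dnr flags come from the same halfword (Phase, More, Do Not Retry). Below is a minimal standalone C sketch of that decoding, assuming only the Status Field bit layout from the NVMe base specification (CQE dword 3, bits 31:16); decode_status is a hypothetical helper for illustration, not SPDK's own print routine in nvme_qpair.c.

#include <stdint.h>
#include <stdio.h>

/* Decode the 16-bit halfword holding the phase bit and Status Field
 * (NVMe CQE dword 3, bits 31:16): bit 0 = Phase (P), bits 8:1 = Status
 * Code (SC), bits 11:9 = Status Code Type (SCT), bit 14 = More (M),
 * bit 15 = Do Not Retry (DNR). */
static void decode_status(uint16_t sf)
{
    unsigned p   = sf & 0x1;
    unsigned sc  = (sf >> 1) & 0xff;
    unsigned sct = (sf >> 9) & 0x7;
    unsigned m   = (sf >> 14) & 0x1;
    unsigned dnr = (sf >> 15) & 0x1;

    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
    if (sct == 0x3 && sc == 0x2)
        printf("-> Path Related Status / Asymmetric Access Inaccessible\n");
}

int main(void)
{
    /* SCT 0x3, SC 0x02 reproduces the "(03/02)" seen in the log above. */
    decode_status((uint16_t)((0x3u << 9) | (0x02u << 1)));
    return 0;
}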
00:19:27.464 [2024-04-17 16:30:19.767917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:90696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.464 [2024-04-17 16:30:19.767934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:27.464 [2024-04-17 16:30:19.767960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:90704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.464 [2024-04-17 16:30:19.767977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:27.465 [2024-04-17 16:30:19.768003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:90712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.465 [2024-04-17 16:30:19.768019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:27.465 [2024-04-17 16:30:19.768046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:90720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.465 [2024-04-17 16:30:19.768063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:27.465 [2024-04-17 16:30:19.768090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:90728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.465 [2024-04-17 16:30:19.768106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:27.465 [2024-04-17 16:30:19.768133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:90736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.465 [2024-04-17 16:30:19.768149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:27.465 [2024-04-17 16:30:19.768176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:90744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.465 [2024-04-17 16:30:19.768192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:27.465 [2024-04-17 16:30:19.768227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:90752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.465 [2024-04-17 16:30:19.768244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:27.465 [2024-04-17 16:30:19.768272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:90760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.465 [2024-04-17 16:30:19.768288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:27.465 [2024-04-17 16:30:19.768315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:90768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.465 [2024-04-17 16:30:19.768332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:27.465 [2024-04-17 16:30:19.768358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:90776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.465 [2024-04-17 16:30:19.768375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:27.465 [2024-04-17 16:30:19.768402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:90784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.465 [2024-04-17 16:30:19.768418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:27.465 [2024-04-17 16:30:19.768445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:90792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.465 [2024-04-17 16:30:19.768461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:27.465 [2024-04-17 16:30:19.768488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:90800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.465 [2024-04-17 16:30:19.768504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:27.465 [2024-04-17 16:30:19.768530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:90808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.465 [2024-04-17 16:30:19.768547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:27.465 [2024-04-17 16:30:19.768573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:90816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.465 [2024-04-17 16:30:19.768590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:27.465 [2024-04-17 16:30:19.768616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:90824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.465 [2024-04-17 16:30:19.768633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:27.465 [2024-04-17 16:30:19.768659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:90832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.465 [2024-04-17 16:30:19.768676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:27.465 [2024-04-17 16:30:19.768702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:90840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.465 [2024-04-17 16:30:19.768719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:27.465 [2024-04-17 16:30:19.768746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:90848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.465 [2024-04-17 16:30:19.768768] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:27.465 [2024-04-17 16:30:26.818387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:76136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.465 [2024-04-17 16:30:26.818460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:27.465 [2024-04-17 16:30:26.818522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:76144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.465 [2024-04-17 16:30:26.818544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:27.465 [2024-04-17 16:30:26.818569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:76152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.465 [2024-04-17 16:30:26.818586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:27.465 [2024-04-17 16:30:26.818609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:76160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.465 [2024-04-17 16:30:26.818625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:27.465 [2024-04-17 16:30:26.818648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:76168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.465 [2024-04-17 16:30:26.818664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:27.465 [2024-04-17 16:30:26.818686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:76176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.465 [2024-04-17 16:30:26.818703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:27.465 [2024-04-17 16:30:26.818725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:76184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.465 [2024-04-17 16:30:26.818742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:27.465 [2024-04-17 16:30:26.818764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:75248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.465 [2024-04-17 16:30:26.818795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:27.465 [2024-04-17 16:30:26.818820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.465 [2024-04-17 16:30:26.818836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:27.465 [2024-04-17 16:30:26.818859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:75264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:27.465 [2024-04-17 16:30:26.818875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:27.465 [2024-04-17 16:30:26.818898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.465 [2024-04-17 16:30:26.818914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:27.465 [2024-04-17 16:30:26.818936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:75280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.465 [2024-04-17 16:30:26.818979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:27.465 [2024-04-17 16:30:26.819003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:75288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.465 [2024-04-17 16:30:26.819019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:27.465 [2024-04-17 16:30:26.819041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:75296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.465 [2024-04-17 16:30:26.819058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:27.465 [2024-04-17 16:30:26.819080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:75304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.465 [2024-04-17 16:30:26.819096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:27.465 [2024-04-17 16:30:26.819119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:75312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.465 [2024-04-17 16:30:26.819134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:27.465 [2024-04-17 16:30:26.819156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:75320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.465 [2024-04-17 16:30:26.819174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:27.465 [2024-04-17 16:30:26.819198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:75328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.465 [2024-04-17 16:30:26.819214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:27.465 [2024-04-17 16:30:26.819247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:75336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.465 [2024-04-17 16:30:26.819263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:27.465 [2024-04-17 16:30:26.819286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 
nsid:1 lba:75344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.465 [2024-04-17 16:30:26.819302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:27.465 [2024-04-17 16:30:26.819324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:75352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.466 [2024-04-17 16:30:26.819340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:27.466 [2024-04-17 16:30:26.819362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:75360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.466 [2024-04-17 16:30:26.819378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:27.466 [2024-04-17 16:30:26.819400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.466 [2024-04-17 16:30:26.819416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:27.466 [2024-04-17 16:30:26.819444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:75376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.466 [2024-04-17 16:30:26.819460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:27.466 [2024-04-17 16:30:26.819492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:75384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.466 [2024-04-17 16:30:26.819509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:27.466 [2024-04-17 16:30:26.819532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:75392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.466 [2024-04-17 16:30:26.819548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:27.466 [2024-04-17 16:30:26.819571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:75400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.466 [2024-04-17 16:30:26.819587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:27.466 [2024-04-17 16:30:26.819609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:75408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.466 [2024-04-17 16:30:26.819626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:27.466 [2024-04-17 16:30:26.819649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:75416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.466 [2024-04-17 16:30:26.819665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:27.466 [2024-04-17 16:30:26.819687] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.466 [2024-04-17 16:30:26.819703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:27.466 [2024-04-17 16:30:26.819726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:76192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.466 [2024-04-17 16:30:26.819743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:27.466 [2024-04-17 16:30:26.819765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:75432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.466 [2024-04-17 16:30:26.819795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:27.466 [2024-04-17 16:30:26.819819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:75440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.466 [2024-04-17 16:30:26.819836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:27.466 [2024-04-17 16:30:26.819860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:75448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.466 [2024-04-17 16:30:26.819876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:27.466 [2024-04-17 16:30:26.820013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:75456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.466 [2024-04-17 16:30:26.820039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:27.466 [2024-04-17 16:30:26.820069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:75464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.466 [2024-04-17 16:30:26.820087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:27.466 [2024-04-17 16:30:26.820127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.466 [2024-04-17 16:30:26.820145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:27.466 [2024-04-17 16:30:26.820170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:75480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.466 [2024-04-17 16:30:26.820186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:27.466 [2024-04-17 16:30:26.820211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:75488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.466 [2024-04-17 16:30:26.820228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 
00:19:27.466 [2024-04-17 16:30:26.820253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:75496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.466 [2024-04-17 16:30:26.820269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:27.466 [2024-04-17 16:30:26.820293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:75504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.466 [2024-04-17 16:30:26.820310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:27.466 [2024-04-17 16:30:26.820335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:75512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.466 [2024-04-17 16:30:26.820351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:27.466 [2024-04-17 16:30:26.820376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:75520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.466 [2024-04-17 16:30:26.820392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:27.466 [2024-04-17 16:30:26.820417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:75528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.466 [2024-04-17 16:30:26.820434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:27.466 [2024-04-17 16:30:26.820460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:75536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.466 [2024-04-17 16:30:26.820476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:27.466 [2024-04-17 16:30:26.820501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:75544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.466 [2024-04-17 16:30:26.820518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:27.466 [2024-04-17 16:30:26.820543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:75552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.466 [2024-04-17 16:30:26.820559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:27.466 [2024-04-17 16:30:26.820590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:75560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.466 [2024-04-17 16:30:26.820606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:27.466 [2024-04-17 16:30:26.820638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:75568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.466 [2024-04-17 16:30:26.820656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:27.466 [2024-04-17 16:30:26.820681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:75576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.466 [2024-04-17 16:30:26.820698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:27.466 [2024-04-17 16:30:26.820723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:75584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.466 [2024-04-17 16:30:26.820739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:27.466 [2024-04-17 16:30:26.820764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:75592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.466 [2024-04-17 16:30:26.820798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:27.466 [2024-04-17 16:30:26.820825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:75600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.466 [2024-04-17 16:30:26.820842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:27.466 [2024-04-17 16:30:26.820867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:75608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.466 [2024-04-17 16:30:26.820883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:27.466 [2024-04-17 16:30:26.820908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:75616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.466 [2024-04-17 16:30:26.820925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:27.466 [2024-04-17 16:30:26.820950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:76200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.466 [2024-04-17 16:30:26.820966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:27.466 [2024-04-17 16:30:26.820991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:76208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.466 [2024-04-17 16:30:26.821007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:27.466 [2024-04-17 16:30:26.821032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:76216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.466 [2024-04-17 16:30:26.821049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:27.467 [2024-04-17 16:30:26.821074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:76224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.467 [2024-04-17 16:30:26.821090] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:27.467 [2024-04-17 16:30:26.821115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:76232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.467 [2024-04-17 16:30:26.821132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:27.467 [2024-04-17 16:30:26.821157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:76240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.467 [2024-04-17 16:30:26.821182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:27.467 [2024-04-17 16:30:26.821208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:76248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.467 [2024-04-17 16:30:26.821224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:27.467 [2024-04-17 16:30:26.821250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:76256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.467 [2024-04-17 16:30:26.821267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:27.467 [2024-04-17 16:30:26.821396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:76264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.467 [2024-04-17 16:30:26.821420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:27.467 [2024-04-17 16:30:26.821451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:75624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.467 [2024-04-17 16:30:26.821470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:27.467 [2024-04-17 16:30:26.821496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:75632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.467 [2024-04-17 16:30:26.821514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:27.467 [2024-04-17 16:30:26.821541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:75640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.467 [2024-04-17 16:30:26.821557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:27.467 [2024-04-17 16:30:26.821584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:75648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.467 [2024-04-17 16:30:26.821600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:27.467 [2024-04-17 16:30:26.821627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:75656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:27.467 [2024-04-17 16:30:26.821643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:27.467 [2024-04-17 16:30:26.821671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:75664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.467 [2024-04-17 16:30:26.821688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:27.467 [2024-04-17 16:30:26.821714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:75672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.467 [2024-04-17 16:30:26.821731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:27.467 [2024-04-17 16:30:26.821757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:75680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.467 [2024-04-17 16:30:26.821788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:27.467 [2024-04-17 16:30:26.821819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:75688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.467 [2024-04-17 16:30:26.821845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:27.467 [2024-04-17 16:30:26.821875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:75696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.467 [2024-04-17 16:30:26.821892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:27.467 [2024-04-17 16:30:26.821919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:75704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.467 [2024-04-17 16:30:26.821935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:27.467 [2024-04-17 16:30:26.821962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:75712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.467 [2024-04-17 16:30:26.821979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:27.467 [2024-04-17 16:30:26.822014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.467 [2024-04-17 16:30:26.822031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:27.467 [2024-04-17 16:30:26.822058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.467 [2024-04-17 16:30:26.822074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:27.467 [2024-04-17 16:30:26.822116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 
nsid:1 lba:75736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.467 [2024-04-17 16:30:26.822133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:27.467 [2024-04-17 16:30:26.822159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:75744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.467 [2024-04-17 16:30:26.822176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:27.467 [2024-04-17 16:30:26.822211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:75752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.467 [2024-04-17 16:30:26.822228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:27.467 [2024-04-17 16:30:26.822254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:75760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.467 [2024-04-17 16:30:26.822271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:27.467 [2024-04-17 16:30:26.822298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:75768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.467 [2024-04-17 16:30:26.822314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:27.467 [2024-04-17 16:30:26.822341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:75776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.467 [2024-04-17 16:30:26.822358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:27.467 [2024-04-17 16:30:26.822384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:75784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.467 [2024-04-17 16:30:26.822401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:27.467 [2024-04-17 16:30:26.822436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:75792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.467 [2024-04-17 16:30:26.822454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:27.467 [2024-04-17 16:30:26.822480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:75800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.467 [2024-04-17 16:30:26.822496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.467 [2024-04-17 16:30:26.822523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:75808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.467 [2024-04-17 16:30:26.822540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.467 [2024-04-17 16:30:26.822567] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:75816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.468 [2024-04-17 16:30:26.822584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:27.468 [2024-04-17 16:30:26.822611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:75824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.468 [2024-04-17 16:30:26.822628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:27.468 [2024-04-17 16:30:26.822655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:75832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.468 [2024-04-17 16:30:26.822672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:27.468 [2024-04-17 16:30:26.822699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:75840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.468 [2024-04-17 16:30:26.822715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:27.468 [2024-04-17 16:30:26.822742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:75848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.468 [2024-04-17 16:30:26.822759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:27.468 [2024-04-17 16:30:26.822806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:75856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.468 [2024-04-17 16:30:26.822825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:27.468 [2024-04-17 16:30:26.822852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:75864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.468 [2024-04-17 16:30:26.822869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:27.468 [2024-04-17 16:30:26.822896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:75872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.468 [2024-04-17 16:30:26.822912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:27.468 [2024-04-17 16:30:26.822939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:75880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.468 [2024-04-17 16:30:26.822963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:27.468 [2024-04-17 16:30:26.822999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:75888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.468 [2024-04-17 16:30:26.823016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000b p:0 m:0 dnr:0 
00:19:27.468 [2024-04-17 16:30:26.823043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:75896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.468 [2024-04-17 16:30:26.823059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:27.468 [2024-04-17 16:30:26.823086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:75904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.468 [2024-04-17 16:30:26.823102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:27.468 [2024-04-17 16:30:26.823128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:75912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.468 [2024-04-17 16:30:26.823145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:27.468 [2024-04-17 16:30:26.823171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:75920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.468 [2024-04-17 16:30:26.823187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:27.468 [2024-04-17 16:30:26.823214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:75928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.468 [2024-04-17 16:30:26.823230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:27.468 [2024-04-17 16:30:26.823257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:75936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.468 [2024-04-17 16:30:26.823274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:27.468 [2024-04-17 16:30:26.823300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:75944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.468 [2024-04-17 16:30:26.823316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:27.468 [2024-04-17 16:30:26.823343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:75952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.468 [2024-04-17 16:30:26.823359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:27.468 [2024-04-17 16:30:26.823386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:75960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.468 [2024-04-17 16:30:26.823403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:27.468 [2024-04-17 16:30:26.823429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.468 [2024-04-17 16:30:26.823446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:27.468 [2024-04-17 16:30:26.823473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:75976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.468 [2024-04-17 16:30:26.823489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:27.468 [2024-04-17 16:30:26.823516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:75984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.468 [2024-04-17 16:30:26.823541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:27.468 [2024-04-17 16:30:26.823569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:75992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.468 [2024-04-17 16:30:26.823586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:27.468 [2024-04-17 16:30:26.823613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:76000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.468 [2024-04-17 16:30:26.823630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:27.468 [2024-04-17 16:30:26.823656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:76008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.468 [2024-04-17 16:30:26.823673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:27.468 [2024-04-17 16:30:26.823699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:76016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.468 [2024-04-17 16:30:26.823716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:27.468 [2024-04-17 16:30:26.823742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:76024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.468 [2024-04-17 16:30:26.823758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:27.468 [2024-04-17 16:30:26.823797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:76032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.468 [2024-04-17 16:30:26.823816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:27.468 [2024-04-17 16:30:26.823843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:76040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.468 [2024-04-17 16:30:26.823860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:27.468 [2024-04-17 16:30:26.823886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:76048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.468 [2024-04-17 16:30:26.823903] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:27.468 [2024-04-17 16:30:26.823929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:76056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.468 [2024-04-17 16:30:26.823946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:27.468 [2024-04-17 16:30:26.823973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:76064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.468 [2024-04-17 16:30:26.823990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:27.468 [2024-04-17 16:30:26.824016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:76072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.468 [2024-04-17 16:30:26.824033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:27.468 [2024-04-17 16:30:26.824059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:76080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.468 [2024-04-17 16:30:26.824093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:27.468 [2024-04-17 16:30:26.824121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:76088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.468 [2024-04-17 16:30:26.824137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:27.468 [2024-04-17 16:30:26.824163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:76096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.468 [2024-04-17 16:30:26.824180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:27.468 [2024-04-17 16:30:26.824208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:76104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.468 [2024-04-17 16:30:26.824224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:27.468 [2024-04-17 16:30:26.824251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.468 [2024-04-17 16:30:26.824268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:27.469 [2024-04-17 16:30:26.824294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:76120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.469 [2024-04-17 16:30:26.824311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:27.469 [2024-04-17 16:30:26.824338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:76128 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0
00:19:27.469 [2024-04-17 16:30:26.824355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:19:27.469 [2024-04-17 16:30:40.288514 - 16:30:40.293542] nvme_qpair.c: [condensed: several hundred near-identical NOTICE/ERROR lines from nvme_io_qpair_print_command, spdk_nvme_print_completion, nvme_qpair_abort_queued_reqs and nvme_qpair_manual_complete_request: every in-flight READ/WRITE on qid:1 (nsid:1, lba 110080-111096 in steps of 8, len:8) completed with ABORTED - SQ DELETION (00/08) as the submission queue was deleted; the commands still queued at the end (WRITEs lba 110920-111096, READs lba 110160-110272) were drained with "aborting queued i/o" / "Command completed manually"]
00:19:27.473 [2024-04-17 16:30:40.293624] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1fe97c0 was disconnected and freed. reset controller.
00:19:27.473 [2024-04-17 16:30:40.293725 - 16:30:40.293856] nvme_qpair.c: [condensed: four NOTICE pairs: admin ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 likewise completed with ABORTED - SQ DELETION (00/08)]
00:19:27.473 [2024-04-17 16:30:40.293870] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2187900 is same with the state(5) to be set
00:19:27.473 [2024-04-17 16:30:40.295478] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:27.473 [2024-04-17 16:30:40.295540] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2187900 (9): Bad file descriptor
00:19:27.473 [2024-04-17 16:30:40.295679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:27.473 [2024-04-17 16:30:40.295740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:27.473 [2024-04-17 16:30:40.295764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2187900 with addr=10.0.0.2, port=4421
00:19:27.473 [2024-04-17 16:30:40.295797] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2187900 is same with the state(5) to be set
00:19:27.473 [2024-04-17 16:30:40.295834] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2187900 (9): Bad file descriptor
00:19:27.473 [2024-04-17 16:30:40.295859] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
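Errno 111 here is ECONNREFUSED: the target's listener on 10.0.0.2:4421 is down at this point, so every reconnect attempt is refused until the test restores the path roughly ten seconds later. A minimal way to watch for the listener coming back, as a plain-bash sketch that is not part of the SPDK scripts (host and port taken from the log above):

  # bash's /dev/tcp does a real connect(); the subshell keeps failing with
  # ECONNREFUSED (errno 111, as logged above) while the listener is down
  until (exec 3<>/dev/tcp/10.0.0.2/4421) 2>/dev/null; do
      sleep 1
  done
  echo "listener is back; the next reconnect poll should succeed"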
00:19:27.473 [2024-04-17 16:30:40.295874] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:27.473 [2024-04-17 16:30:40.295889] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:27.473 [2024-04-17 16:30:40.295915] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:27.473 [2024-04-17 16:30:40.295931] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:27.473 [2024-04-17 16:30:50.370233] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:19:27.473 Received shutdown signal, test time was about 55.606764 seconds
00:19:27.473
00:19:27.473                                                              Latency(us)
00:19:27.473 Device Information                                         : runtime(s)     IOPS    MiB/s  Fail/s  TO/s    Average     min         max
00:19:27.473 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:27.473 Verification LBA range: start 0x0 length 0x4000
00:19:27.473 Nvme0n1                                                    :      55.61  7015.73    27.41    0.00  0.00   18215.11  703.77  7046430.72
00:19:27.473 ===================================================================================================================
00:19:27.473 Total                                                      :             7015.73    27.41    0.00  0.00   18215.11  703.77  7046430.72
00:19:27.473 16:31:00 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:19:27.473 16:31:01 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:19:27.473 16:31:01 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:19:27.473 16:31:01 -- host/multipath.sh@125 -- # nvmftestfini
00:19:27.473 16:31:01 -- nvmf/common.sh@477 -- # nvmfcleanup
00:19:27.473 16:31:01 -- nvmf/common.sh@117 -- # sync
00:19:27.732 16:31:01 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:19:27.732 16:31:01 -- nvmf/common.sh@120 -- # set +e
00:19:27.732 16:31:01 -- nvmf/common.sh@121 -- # for i in {1..20}
00:19:27.732 16:31:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:19:27.732 rmmod nvme_tcp
00:19:27.732 rmmod nvme_fabrics
00:19:27.732 rmmod nvme_keyring
00:19:27.732 16:31:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:19:27.732 16:31:01 -- nvmf/common.sh@124 -- # set -e
00:19:27.732 16:31:01 -- nvmf/common.sh@125 -- # return 0
00:19:27.732 16:31:01 -- nvmf/common.sh@478 -- # '[' -n 87042 ']'
00:19:27.732 16:31:01 -- nvmf/common.sh@479 -- # killprocess 87042
00:19:27.732 16:31:01 -- common/autotest_common.sh@936 -- # '[' -z 87042 ']'
00:19:27.732 16:31:01 -- common/autotest_common.sh@940 -- # kill -0 87042
00:19:27.732 16:31:01 -- common/autotest_common.sh@941 -- # uname
00:19:27.732 16:31:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:19:27.732 16:31:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87042
00:19:27.732 killing process with pid 87042
00:19:27.732 16:31:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:19:27.732 16:31:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:19:27.732 16:31:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87042'
00:19:27.732 16:31:01 -- common/autotest_common.sh@955 -- # kill 87042
00:19:27.732 16:31:01 -- common/autotest_common.sh@960 -- # wait 87042
00:19:27.990 16:31:01 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:19:27.990 16:31:01 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:19:27.991 16:31:01 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
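The bdevperf summary above ties out arithmetically: with 4096-byte I/Os, the MiB/s column is IOPS * 4096 / 2^20. A quick shell check (illustrative only, not part of the test):

  # 7015.73 IOPS at 4096 B per I/O, converted to MiB/s
  echo "scale=4; 7015.73 * 4096 / 1048576" | bc
  # prints 27.4051, i.e. the 27.41 MiB/s reported for Nvme0n1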
16:31:01 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
16:31:01 -- nvmf/common.sh@278 -- # remove_spdk_ns
16:31:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
16:31:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
16:31:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
16:31:01 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if
00:19:27.991
00:19:27.991 real 1m2.274s
00:19:27.991 user 2m56.753s
00:19:27.991 sys 0m13.468s
00:19:27.991 16:31:01 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:19:27.991 16:31:01 -- common/autotest_common.sh@10 -- # set +x
00:19:27.991 ************************************
00:19:27.991 END TEST nvmf_multipath
00:19:27.991 ************************************
00:19:27.991 16:31:01 -- nvmf/nvmf.sh@115 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
00:19:27.991 16:31:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:19:27.991 16:31:01 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:19:27.991 16:31:01 -- common/autotest_common.sh@10 -- # set +x
00:19:27.991 ************************************
00:19:27.991 START TEST nvmf_timeout
00:19:27.991 ************************************
00:19:27.991 16:31:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
00:19:28.249 * Looking for test storage...
00:19:28.249 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:19:28.249 16:31:02 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:19:28.249 16:31:02 -- nvmf/common.sh@7 -- # uname -s
00:19:28.249 16:31:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:19:28.249 16:31:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:19:28.249 16:31:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:19:28.249 16:31:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:19:28.249 16:31:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:19:28.249 16:31:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:19:28.249 16:31:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:19:28.249 16:31:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:19:28.249 16:31:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:19:28.249 16:31:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:19:28.249 16:31:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d
00:19:28.249 16:31:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=35bbb10f-fc38-42ac-b909-033700c5e05d
00:19:28.249 16:31:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:19:28.249 16:31:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:19:28.249 16:31:02 -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:19:28.249 16:31:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:19:28.249 16:31:02 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:19:28.249 16:31:02 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]]
00:19:28.249 16:31:02 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:19:28.249 16:31:02 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
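The NVME_HOSTNQN above comes from nvme-cli's gen-hostnqn, which emits a random nqn.2014-08.org.nvmexpress:uuid:<uuid>; the script keeps the bare UUID as NVME_HOSTID. A sketch of the same derivation, plus an illustrative connect line (not from this log) showing how the two flags in NVME_HOST are typically consumed:

  NVME_HOSTNQN=$(nvme gen-hostnqn)       # e.g. nqn.2014-08.org.nvmexpress:uuid:35bbb10f-...
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # strip the prefix, keep the bare UUID
  # nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn \
  #     --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"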
00:19:28.249 16:31:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same three toolchain dirs repeated several more times]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:28.249 16:31:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[the same repeated-prefix PATH as on @2]
00:19:28.250 16:31:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same repeated-prefix PATH as on @2]
00:19:28.250 16:31:02 -- paths/export.sh@5 -- # export PATH
00:19:28.250 16:31:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same repeated-prefix PATH as on @2]
00:19:28.250 16:31:02 -- nvmf/common.sh@47 -- # : 0
00:19:28.250 16:31:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:19:28.250 16:31:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:19:28.250 16:31:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:19:28.250 16:31:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:19:28.250 16:31:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:19:28.250 16:31:02 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:19:28.250 16:31:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:19:28.250 16:31:02 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:19:28.250 16:31:02 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64
00:19:28.250 16:31:02 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:19:28.250 16:31:02 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:19:28.250 16:31:02 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh
00:19:28.250 16:31:02 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:19:28.250 16:31:02 -- host/timeout.sh@19 -- # nvmftestinit
00:19:28.250 16:31:02 -- nvmf/common.sh@430 -- # '[' -z tcp ']'
00:19:28.250 16:31:02 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT
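The PATH bloat above is mechanical: paths/export.sh prepends the same toolchain directories unconditionally, and it is sourced again by each nested test script in this job. A guard of the following shape would keep PATH flat; it is hypothetical, not code from this repository:

  # prepend a directory only if it is not already on PATH
  prepend_path() {
      case ":$PATH:" in
          *":$1:"*) ;;          # already present, skip
          *) PATH=$1:$PATH ;;
      esac
  }
  prepend_path /opt/go/1.21.1/bin
  prepend_path /opt/protoc/21.7/bin
  prepend_path /opt/golangci/1.54.2/bin
  export PATH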
SIGINT SIGTERM EXIT 00:19:28.250 16:31:02 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:28.250 16:31:02 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:28.250 16:31:02 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:28.250 16:31:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:28.250 16:31:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:28.250 16:31:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:28.250 16:31:02 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:19:28.250 16:31:02 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:19:28.250 16:31:02 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:19:28.250 16:31:02 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:19:28.250 16:31:02 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:19:28.250 16:31:02 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:19:28.250 16:31:02 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:28.250 16:31:02 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:28.250 16:31:02 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:28.250 16:31:02 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:28.250 16:31:02 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:28.250 16:31:02 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:28.250 16:31:02 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:28.250 16:31:02 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:28.250 16:31:02 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:28.250 16:31:02 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:28.250 16:31:02 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:28.250 16:31:02 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:28.250 16:31:02 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:28.250 16:31:02 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:28.250 Cannot find device "nvmf_tgt_br" 00:19:28.250 16:31:02 -- nvmf/common.sh@155 -- # true 00:19:28.250 16:31:02 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:28.250 Cannot find device "nvmf_tgt_br2" 00:19:28.250 16:31:02 -- nvmf/common.sh@156 -- # true 00:19:28.250 16:31:02 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:28.250 16:31:02 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:28.250 Cannot find device "nvmf_tgt_br" 00:19:28.250 16:31:02 -- nvmf/common.sh@158 -- # true 00:19:28.250 16:31:02 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:28.250 Cannot find device "nvmf_tgt_br2" 00:19:28.250 16:31:02 -- nvmf/common.sh@159 -- # true 00:19:28.250 16:31:02 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:28.250 16:31:02 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:28.250 16:31:02 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:28.250 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:28.250 16:31:02 -- nvmf/common.sh@162 -- # true 00:19:28.250 16:31:02 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:28.561 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:28.561 16:31:02 -- nvmf/common.sh@163 -- # true 00:19:28.561 16:31:02 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:28.561 16:31:02 -- nvmf/common.sh@169 
-- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:28.561 16:31:02 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:28.561 16:31:02 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:28.561 16:31:02 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:28.561 16:31:02 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:28.561 16:31:02 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:28.561 16:31:02 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:28.561 16:31:02 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:28.561 16:31:02 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:28.561 16:31:02 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:28.561 16:31:02 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:28.561 16:31:02 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:28.561 16:31:02 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:28.561 16:31:02 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:28.561 16:31:02 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:28.561 16:31:02 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:28.561 16:31:02 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:28.561 16:31:02 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:28.561 16:31:02 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:28.561 16:31:02 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:28.561 16:31:02 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:28.561 16:31:02 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:28.562 16:31:02 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:28.562 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:28.562 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:19:28.562 00:19:28.562 --- 10.0.0.2 ping statistics --- 00:19:28.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:28.562 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:19:28.562 16:31:02 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:28.562 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:28.562 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:19:28.562 00:19:28.562 --- 10.0.0.3 ping statistics --- 00:19:28.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:28.562 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:19:28.562 16:31:02 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:28.562 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:28.562 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:19:28.562 00:19:28.562 --- 10.0.0.1 ping statistics --- 00:19:28.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:28.562 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:19:28.562 16:31:02 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:28.562 16:31:02 -- nvmf/common.sh@422 -- # return 0 00:19:28.562 16:31:02 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:28.562 16:31:02 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:28.562 16:31:02 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:28.562 16:31:02 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:28.562 16:31:02 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:28.562 16:31:02 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:28.562 16:31:02 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:28.562 16:31:02 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:19:28.562 16:31:02 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:28.562 16:31:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:28.562 16:31:02 -- common/autotest_common.sh@10 -- # set +x 00:19:28.562 16:31:02 -- nvmf/common.sh@470 -- # nvmfpid=88423 00:19:28.562 16:31:02 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:28.562 16:31:02 -- nvmf/common.sh@471 -- # waitforlisten 88423 00:19:28.562 16:31:02 -- common/autotest_common.sh@817 -- # '[' -z 88423 ']' 00:19:28.562 16:31:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.562 16:31:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:28.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:28.562 16:31:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:28.562 16:31:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:28.562 16:31:02 -- common/autotest_common.sh@10 -- # set +x 00:19:28.847 [2024-04-17 16:31:02.578369] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:19:28.847 [2024-04-17 16:31:02.578463] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:28.847 [2024-04-17 16:31:02.719025] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:28.847 [2024-04-17 16:31:02.850701] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:28.847 [2024-04-17 16:31:02.850769] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:28.847 [2024-04-17 16:31:02.850805] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:28.847 [2024-04-17 16:31:02.850816] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:28.847 [2024-04-17 16:31:02.850825] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
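For reference, the nvmf_veth_init sequence traced above (nvmf/common.sh@141-@207) builds a small bridged topology: the initiator keeps 10.0.0.1 on nvmf_init_if in the root namespace, while the target addresses 10.0.0.2 and 10.0.0.3 sit on veth peers moved into the nvmf_tgt_ns_spdk namespace, with the host-side peers enslaved to the nvmf_br bridge. A minimal standalone sketch of the same plumbing, using the names from the log (the "Cannot find device" / "Cannot open network namespace" lines earlier are just cleanup of a previous run that left nothing behind):

  # namespace plus one veth pair per interface (*_if is the endpoint, *_br the bridge-side peer)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  # move the target endpoints into the namespace and assign addresses
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # bring everything up, inside and outside the namespace
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # bridge the host-side peers together
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # admit NVMe/TCP traffic (port 4420), hairpin traffic across the bridge, then verify
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The sub-millisecond round-trip times in the ping output above are consistent with this being a purely local veth path.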
00:19:28.847 [2024-04-17 16:31:02.851924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:28.847 [2024-04-17 16:31:02.851930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:29.782 16:31:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:29.782 16:31:03 -- common/autotest_common.sh@850 -- # return 0 00:19:29.782 16:31:03 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:29.782 16:31:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:29.782 16:31:03 -- common/autotest_common.sh@10 -- # set +x 00:19:29.782 16:31:03 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:29.782 16:31:03 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:29.782 16:31:03 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:30.040 [2024-04-17 16:31:03.843883] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:30.040 16:31:03 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:30.298 Malloc0 00:19:30.298 16:31:04 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:30.556 16:31:04 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:30.815 16:31:04 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:31.073 [2024-04-17 16:31:04.864393] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:31.073 16:31:04 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:31.073 16:31:04 -- host/timeout.sh@32 -- # bdevperf_pid=88514 00:19:31.073 16:31:04 -- host/timeout.sh@34 -- # waitforlisten 88514 /var/tmp/bdevperf.sock 00:19:31.073 16:31:04 -- common/autotest_common.sh@817 -- # '[' -z 88514 ']' 00:19:31.073 16:31:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:31.073 16:31:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:31.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:31.073 16:31:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:31.073 16:31:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:31.073 16:31:04 -- common/autotest_common.sh@10 -- # set +x 00:19:31.073 [2024-04-17 16:31:04.925592] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 
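While bdevperf initializes, it is worth restating the target-side provisioning that was just traced (host/timeout.sh@25-@29): the stock NVMe-oF/TCP bring-up of transport, bdev, subsystem, namespace and listener. A consolidated sketch with the same values the test uses; the -o and -u 8192 flags come straight from the log (in rpc.py these map to the TCP c2h-success toggle and the I/O unit size, respectively), and rpc.py talks to the target's default /var/tmp/spdk.sock:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # TCP transport, with the flags from the log
  $rpc nvmf_create_transport -t tcp -o -u 8192
  # RAM-backed bdev to serve as the namespace: 64 MiB, 512-byte blocks
  # (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE above)
  $rpc bdev_malloc_create 64 512 -b Malloc0
  # subsystem cnode1: -a allows any host NQN, -s sets the serial number
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # expose it on the namespace-side address the initiator will dial
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420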
00:19:31.073 [2024-04-17 16:31:04.925668] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88514 ] 00:19:31.073 [2024-04-17 16:31:05.055190] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.332 [2024-04-17 16:31:05.163488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:31.897 16:31:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:31.897 16:31:05 -- common/autotest_common.sh@850 -- # return 0 00:19:31.897 16:31:05 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:32.463 16:31:06 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:32.724 NVMe0n1 00:19:32.724 16:31:06 -- host/timeout.sh@51 -- # rpc_pid=88562 00:19:32.724 16:31:06 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:32.724 16:31:06 -- host/timeout.sh@53 -- # sleep 1 00:19:32.724 Running I/O for 10 seconds... 00:19:33.657 16:31:07 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:33.917 [2024-04-17 16:31:07.796603] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xceccc0 is same with the state(5) to be set [this tcp.c:1587 message repeats several dozen more times between 16:31:07.797 and 16:31:07.800; the duplicate records are elided]
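The two attach flags above are the crux of the timeout test: --reconnect-delay-sec 2 sets how long bdev_nvme waits between reconnect attempts once the connection drops, and --ctrlr-loss-timeout-sec 5 bounds how long it keeps retrying before declaring the controller lost and deleting NVMe0; -r -1 in the preceding bdev_nvme_set_options call sets the retry count to -1 (unlimited), so recovery is governed by those two timers. A sketch of the initiator-side sequence against bdevperf's RPC socket:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  # -r -1: unlimited retries; error recovery is then driven by the timers below
  $rpc -s $sock bdev_nvme_set_options -r -1
  # attach over TCP; if the path goes down, expect a reconnect attempt every 2 s
  # and deletion of NVMe0 once the controller has been gone for 5 s
  $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

The nvmf_subsystem_remove_listener call issued one second into the run is how the test forces the path down; the tcp.c recv-state errors above and the flood of aborted commands below are the expected fallout, after which the test observes whether I/O recovers within that window.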
00:19:33.917 [2024-04-17 16:31:07.800326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:79608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.917 [2024-04-17 16:31:07.800367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [the same command/completion pair then repeats for every outstanding bdevperf I/O: READs for lba 79616-80272 and WRITEs for lba 80376-80616, all len:8 on qid:1, each completed ABORTED - SQ DELETION (00/08) as the target tears down the submission queue behind the removed listener; over a hundred near-identical pairs are elided, ending with:] 00:19:33.921 [2024-04-17 16:31:07.802878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:80272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.921 [2024-04-17 16:31:07.802887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.921 [2024-04-17 16:31:07.802898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:80280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.921 [2024-04-17 16:31:07.802907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.921 [2024-04-17 16:31:07.802918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:80288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.921 [2024-04-17 16:31:07.802928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.921 [2024-04-17 16:31:07.802938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:80296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.921 [2024-04-17 16:31:07.802948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.921 [2024-04-17 16:31:07.802964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:80304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.921 [2024-04-17 16:31:07.802974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.921 [2024-04-17 16:31:07.802985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:80312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.921 [2024-04-17 16:31:07.802994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.921 [2024-04-17 16:31:07.803005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:80320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.921 [2024-04-17 16:31:07.803014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.921 [2024-04-17 16:31:07.803034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:80328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.921 [2024-04-17 16:31:07.803043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.921 [2024-04-17 16:31:07.803054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:80336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.921 [2024-04-17 16:31:07.803064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.921 [2024-04-17 16:31:07.803075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:80344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.921 [2024-04-17 16:31:07.803084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.921 [2024-04-17 16:31:07.803095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:80352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.921 [2024-04-17 16:31:07.803104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:33.921 [2024-04-17 16:31:07.803120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:80360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.921 [2024-04-17 16:31:07.803129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.921 [2024-04-17 16:31:07.803140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:80624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.921 [2024-04-17 16:31:07.803150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.921 [2024-04-17 16:31:07.803160] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171af70 is same with the state(5) to be set 00:19:33.921 [2024-04-17 16:31:07.803172] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:33.921 [2024-04-17 16:31:07.803180] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:33.921 [2024-04-17 16:31:07.803188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80368 len:8 PRP1 0x0 PRP2 0x0 00:19:33.921 [2024-04-17 16:31:07.803198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.921 [2024-04-17 16:31:07.803253] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x171af70 was disconnected and freed. reset controller. 00:19:33.921 [2024-04-17 16:31:07.803491] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:33.921 [2024-04-17 16:31:07.803578] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16b1dc0 (9): Bad file descriptor 00:19:33.921 [2024-04-17 16:31:07.803694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:33.921 [2024-04-17 16:31:07.803744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:33.921 [2024-04-17 16:31:07.803762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16b1dc0 with addr=10.0.0.2, port=4420 00:19:33.921 [2024-04-17 16:31:07.803793] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b1dc0 is same with the state(5) to be set 00:19:33.921 [2024-04-17 16:31:07.803821] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16b1dc0 (9): Bad file descriptor 00:19:33.921 [2024-04-17 16:31:07.803839] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:33.921 [2024-04-17 16:31:07.803849] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:33.921 [2024-04-17 16:31:07.803860] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:33.921 [2024-04-17 16:31:07.803880] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
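The burst above is the expected host-side signature of a dropped connection: every command still queued on qpair 0x171af70 completes with ABORTED - SQ DELETION, the qpair is freed, and the reconnect attempts fail with errno = 111 (ECONNREFUSED on Linux, since nothing is listening on 10.0.0.2:4420 any more). The trace that follows then checks that the controller and its bdev survive while the reconnect loop runs. A minimal sketch of those two checks, with the rpc.py | jq pipelines copied from the trace and the wrapper functions an assumption about how host/timeout.sh packages them:

# Sketch only: pipelines from the trace (host/timeout.sh @41/@37), wrappers assumed.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

get_controller() {
    # Prints "NVMe0" while the controller object still exists in bdevperf,
    # nothing once it has been deleted.
    "$rpc_py" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name'
}

get_bdev() {
    # Prints "NVMe0n1" while the namespace bdev is still registered.
    "$rpc_py" -s "$sock" bdev_get_bdevs | jq -r '.[].name'
}

# The checks traced at 16:31:09-16:31:10: both objects must outlive the
# dead transport while reconnects are still being retried.
[[ $(get_controller) == NVMe0 ]]
[[ $(get_bdev) == NVMe0n1 ]]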
00:19:33.921 [2024-04-17 16:31:07.803892] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
16:31:07 -- host/timeout.sh@56 -- # sleep 2
00:19:35.824 [2024-04-17 16:31:09.804103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:35.824 [2024-04-17 16:31:09.804220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:35.824 [2024-04-17 16:31:09.804241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16b1dc0 with addr=10.0.0.2, port=4420
00:19:35.824 [2024-04-17 16:31:09.804257] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b1dc0 is same with the state(5) to be set
00:19:35.824 [2024-04-17 16:31:09.804290] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16b1dc0 (9): Bad file descriptor
00:19:35.824 [2024-04-17 16:31:09.804325] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:35.824 [2024-04-17 16:31:09.804337] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:35.825 [2024-04-17 16:31:09.804349] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:35.825 [2024-04-17 16:31:09.804378] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:35.825 [2024-04-17 16:31:09.804400] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:35.825 16:31:09 -- host/timeout.sh@57 -- # get_controller
00:19:35.825 16:31:09 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:19:35.825 16:31:09 -- host/timeout.sh@41 -- # jq -r '.[].name'
00:19:36.083 16:31:10 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]]
00:19:36.083 16:31:10 -- host/timeout.sh@58 -- # get_bdev
00:19:36.083 16:31:10 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:19:36.083 16:31:10 -- host/timeout.sh@37 -- # jq -r '.[].name'
00:19:36.341 16:31:10 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]]
00:19:36.341 16:31:10 -- host/timeout.sh@61 -- # sleep 5
00:19:38.275 [2024-04-17 16:31:11.804594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:38.275 [2024-04-17 16:31:11.804709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:38.275 [2024-04-17 16:31:11.804737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16b1dc0 with addr=10.0.0.2, port=4420
00:19:38.275 [2024-04-17 16:31:11.804753] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b1dc0 is same with the state(5) to be set
00:19:38.275 [2024-04-17 16:31:11.804794] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16b1dc0 (9): Bad file descriptor
00:19:38.275 [2024-04-17 16:31:11.804819] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:38.275 [2024-04-17 16:31:11.804830] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:38.275 [2024-04-17 16:31:11.804842] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:38.275 [2024-04-17 16:31:11.804871] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:38.275 [2024-04-17 16:31:11.804885] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:40.172 [2024-04-17 16:31:13.805041] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:41.105
00:19:41.105                                       Latency(us)
00:19:41.105 Device Information : runtime(s)      IOPS     MiB/s   Fail/s   TO/s     Average       min          max
00:19:41.105 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:41.105 Verification LBA range: start 0x0 length 0x4000
00:19:41.105 NVMe0n1            :       8.17      1217.26  4.75    15.66    0.00     103651.13     2085.24      7046430.72
00:19:41.105 ===================================================================================================================
00:19:41.106 Total              :                 1217.26  4.75    15.66    0.00     103651.13     2085.24      7046430.72
00:19:41.106 0
00:19:41.363 16:31:15 -- host/timeout.sh@62 -- # get_controller
00:19:41.363 16:31:15 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:19:41.363 16:31:15 -- host/timeout.sh@41 -- # jq -r '.[].name'
00:19:41.622 16:31:15 -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:19:41.622 16:31:15 -- host/timeout.sh@63 -- # get_bdev
00:19:41.622 16:31:15 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:19:41.622 16:31:15 -- host/timeout.sh@37 -- # jq -r '.[].name'
00:19:41.880 16:31:15 -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:19:41.880 16:31:15 -- host/timeout.sh@65 -- # wait 88562
00:19:41.880 16:31:15 -- host/timeout.sh@67 -- # killprocess 88514
00:19:41.880 16:31:15 -- common/autotest_common.sh@936 -- # '[' -z 88514 ']'
00:19:41.880 16:31:15 -- common/autotest_common.sh@940 -- # kill -0 88514
00:19:41.880 16:31:15 -- common/autotest_common.sh@941 -- # uname
00:19:41.880 16:31:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:19:41.880 16:31:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88514
00:19:41.880 16:31:15 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:19:41.880 16:31:15 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:19:41.880 killing process with pid 88514
00:19:41.880 16:31:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88514'
00:19:41.880 16:31:15 -- common/autotest_common.sh@955 -- # kill 88514
00:19:41.880 Received shutdown signal, test time was about 9.262856 seconds
00:19:41.880
00:19:41.880                                       Latency(us)
00:19:41.880 Device Information : runtime(s)      IOPS     MiB/s   Fail/s   TO/s     Average       min          max
00:19:41.880 ===================================================================================================================
00:19:41.880 Total              :                 0.00     0.00    0.00     0.00     0.00          0.00         0.00
00:19:41.880 16:31:15 -- common/autotest_common.sh@960 -- # wait 88514
00:19:42.138 16:31:16 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
[2024-04-17 16:31:16.412766] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
16:31:16 -- host/timeout.sh@74 -- # bdevperf_pid=88724
16:31:16 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
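Between test cases the listener is re-created on the target and a fresh bdevperf instance is launched against the same subsystem. A sketch of that sequence, with every path and flag copied from the trace above and only the backgrounding an assumption about how the script keeps the pid:

# Sketch only: restore the TCP listener the previous case removed
# (target-side default RPC socket, hence no -s override).
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc_py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# -z starts bdevperf idle: it opens /var/tmp/bdevperf.sock and waits for a
# perform_tests RPC instead of running the job immediately, so the test can
# attach the controller first.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
bdevperf_pid=$!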
00:19:42.396 16:31:16 -- host/timeout.sh@76 -- # waitforlisten 88724 /var/tmp/bdevperf.sock
16:31:16 -- common/autotest_common.sh@817 -- # '[' -z 88724 ']'
16:31:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock
16:31:16 -- common/autotest_common.sh@822 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
16:31:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
16:31:16 -- common/autotest_common.sh@826 -- # xtrace_disable
16:31:16 -- common/autotest_common.sh@10 -- # set +x
[2024-04-17 16:31:16.477736] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization...
[2024-04-17 16:31:16.477826] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88724 ]
[2024-04-17 16:31:16.611539] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-04-17 16:31:16.731110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
16:31:17 -- common/autotest_common.sh@846 -- # (( i == 0 ))
16:31:17 -- common/autotest_common.sh@850 -- # return 0
16:31:17 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
16:31:17 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
NVMe0n1
16:31:18 -- host/timeout.sh@84 -- # rpc_pid=88767
16:31:18 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
16:31:18 -- host/timeout.sh@86 -- # sleep 1
Running I/O for 10 seconds...
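The attach above is where the reconnect behaviour under test is configured. As I read the SPDK knobs (an interpretation, not something this log states): the host retries the TCP connect every reconnect-delay-sec, starts failing queued I/O back to bdevperf after fast-io-fail-timeout-sec, and deletes the controller outright after ctrlr-loss-timeout-sec without a connection. Restated as a sketch, commands copied from the trace:

# Sketch only; flag semantics in the comments are my reading of the knobs.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# -r -1: bdev-level retry count, read here as retry failed I/O indefinitely
# (assumption).
"$rpc_py" -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1

# Reconnect policy under test: retry the connect every 1 s, fail queued I/O
# back after 2 s, give the controller up after 5 s without a connection.
"$rpc_py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

# bdevperf was started with -z, so the queued verify job only begins once
# this RPC arrives.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests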
00:19:45.239 16:31:19 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:19:45.504 [... 2024-04-17 16:31:19.346346-16:31:19.347139: tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee06d0 is same with the state(5) to be set -- the same target-side entry repeated several dozen times while the listener is torn down ...]
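The flood above is the target side of the fault this test case injects: the TCP listener is yanked out from under an active bdevperf run, so the target-side qpair state machine spins while the connection collapses. A sketch of the injection, command copied from the trace (it goes to the target's default RPC socket, so no -s override):

# Sketch only: remove the listener mid-I/O to force a disconnect.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
    nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The abort records that follow are the host side of the same event: every command still queued on the dead submission queue completes with ABORTED - SQ DELETION.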
00:19:45.505 [... 2024-04-17 16:31:19.347557-16:31:19.349307: nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion entry pairs repeated for every command still queued on the dying qpair -- READ lba:78376-78760 len:8 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE lba:78944-79184 len:8 (SGL DATA BLOCK OFFSET 0x0 len:0x1000), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
[2024-04-17 16:31:19.349322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:79192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-04-17 16:31:19.349332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.507 [2024-04-17 16:31:19.349343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:79200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.507 [2024-04-17 16:31:19.349352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.507 [2024-04-17 16:31:19.349364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:78768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.507 [2024-04-17 16:31:19.349373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.507 [2024-04-17 16:31:19.349384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:78776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.507 [2024-04-17 16:31:19.349394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.507 [2024-04-17 16:31:19.349405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:78784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.507 [2024-04-17 16:31:19.349414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.507 [2024-04-17 16:31:19.349425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.507 [2024-04-17 16:31:19.349434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.507 [2024-04-17 16:31:19.349446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:78800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.507 [2024-04-17 16:31:19.349456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.507 [2024-04-17 16:31:19.349467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:78808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.507 [2024-04-17 16:31:19.349477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.507 [2024-04-17 16:31:19.349488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:78816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.507 [2024-04-17 16:31:19.349503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.507 [2024-04-17 16:31:19.349515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:78824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.507 [2024-04-17 16:31:19.349524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.507 [2024-04-17 16:31:19.349535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:79208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.507 [2024-04-17 16:31:19.349545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.507 
[2024-04-17 16:31:19.349556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:79216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.507 [2024-04-17 16:31:19.349566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.507 [2024-04-17 16:31:19.349577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.507 [2024-04-17 16:31:19.349586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.507 [2024-04-17 16:31:19.349597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:79232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.507 [2024-04-17 16:31:19.349606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.507 [2024-04-17 16:31:19.349617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:79240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.507 [2024-04-17 16:31:19.349627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.507 [2024-04-17 16:31:19.349638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:79248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.507 [2024-04-17 16:31:19.349647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.507 [2024-04-17 16:31:19.349662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:79256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.507 [2024-04-17 16:31:19.349672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.507 [2024-04-17 16:31:19.349683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:79264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.507 [2024-04-17 16:31:19.349692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.507 [2024-04-17 16:31:19.349703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.507 [2024-04-17 16:31:19.349712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.507 [2024-04-17 16:31:19.349723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:79280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.507 [2024-04-17 16:31:19.349733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.507 [2024-04-17 16:31:19.349743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:79288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.507 [2024-04-17 16:31:19.349753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.507 [2024-04-17 16:31:19.349763] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:79296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.507 [2024-04-17 16:31:19.349782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.507 [2024-04-17 16:31:19.349795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:79304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.507 [2024-04-17 16:31:19.349804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.507 [2024-04-17 16:31:19.349815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:79312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.507 [2024-04-17 16:31:19.349824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.507 [2024-04-17 16:31:19.349835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:79320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.507 [2024-04-17 16:31:19.349849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.507 [2024-04-17 16:31:19.349861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:79328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.507 [2024-04-17 16:31:19.349871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.507 [2024-04-17 16:31:19.349882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.507 [2024-04-17 16:31:19.349891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.507 [2024-04-17 16:31:19.349902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:79344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.507 [2024-04-17 16:31:19.349912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.507 [2024-04-17 16:31:19.349923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:79352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.507 [2024-04-17 16:31:19.349932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.507 [2024-04-17 16:31:19.349943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:79360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.507 [2024-04-17 16:31:19.349952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.507 [2024-04-17 16:31:19.349963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:79368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.507 [2024-04-17 16:31:19.349972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.507 [2024-04-17 16:31:19.349983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:108 nsid:1 lba:79376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.507 [2024-04-17 16:31:19.349993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.507 [2024-04-17 16:31:19.350008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:79384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.507 [2024-04-17 16:31:19.350018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.507 [2024-04-17 16:31:19.350029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:79392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.507 [2024-04-17 16:31:19.350039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.507 [2024-04-17 16:31:19.350050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:78832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.507 [2024-04-17 16:31:19.350059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.507 [2024-04-17 16:31:19.350070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:78840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.507 [2024-04-17 16:31:19.350090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.507 [2024-04-17 16:31:19.350102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:78848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.507 [2024-04-17 16:31:19.350112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.507 [2024-04-17 16:31:19.350123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:78856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.507 [2024-04-17 16:31:19.350132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.507 [2024-04-17 16:31:19.350143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:78864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.507 [2024-04-17 16:31:19.350152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.507 [2024-04-17 16:31:19.350163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:78872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.507 [2024-04-17 16:31:19.350172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.507 [2024-04-17 16:31:19.350183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:78880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.507 [2024-04-17 16:31:19.350197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.507 [2024-04-17 16:31:19.350209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:78888 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.507 [2024-04-17 16:31:19.350218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.507 [2024-04-17 16:31:19.350229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:78896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.507 [2024-04-17 16:31:19.350239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.507 [2024-04-17 16:31:19.350250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:78904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.507 [2024-04-17 16:31:19.350259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.507 [2024-04-17 16:31:19.350270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:78912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.507 [2024-04-17 16:31:19.350280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.507 [2024-04-17 16:31:19.350291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:78920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.507 [2024-04-17 16:31:19.350300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.507 [2024-04-17 16:31:19.350311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:78928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.507 [2024-04-17 16:31:19.350320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.507 [2024-04-17 16:31:19.350330] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1090f70 is same with the state(5) to be set 00:19:45.507 [2024-04-17 16:31:19.350343] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:45.507 [2024-04-17 16:31:19.350355] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:45.507 [2024-04-17 16:31:19.350363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78936 len:8 PRP1 0x0 PRP2 0x0 00:19:45.507 [2024-04-17 16:31:19.350373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.507 [2024-04-17 16:31:19.350432] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1090f70 was disconnected and freed. reset controller. 
00:19:45.507 [2024-04-17 16:31:19.350685] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:45.507 [2024-04-17 16:31:19.350787] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1027dc0 (9): Bad file descriptor
00:19:45.507 [2024-04-17 16:31:19.350904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:45.507 [2024-04-17 16:31:19.350955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:45.507 [2024-04-17 16:31:19.350972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1027dc0 with addr=10.0.0.2, port=4420
00:19:45.507 [2024-04-17 16:31:19.350984] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1027dc0 is same with the state(5) to be set
00:19:45.507 [2024-04-17 16:31:19.351003] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1027dc0 (9): Bad file descriptor
00:19:45.507 [2024-04-17 16:31:19.351019] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:45.508 [2024-04-17 16:31:19.351029] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:45.508 [2024-04-17 16:31:19.351039] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:45.508 [2024-04-17 16:31:19.351059] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:45.508 [2024-04-17 16:31:19.351070] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:45.508 16:31:19 -- host/timeout.sh@90 -- # sleep 1
00:19:46.470 [2024-04-17 16:31:20.351234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:46.470 [2024-04-17 16:31:20.351354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:46.470 [2024-04-17 16:31:20.351374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1027dc0 with addr=10.0.0.2, port=4420
00:19:46.470 [2024-04-17 16:31:20.351390] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1027dc0 is same with the state(5) to be set
00:19:46.470 [2024-04-17 16:31:20.351420] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1027dc0 (9): Bad file descriptor
00:19:46.470 [2024-04-17 16:31:20.351440] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:46.470 [2024-04-17 16:31:20.351450] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:46.470 [2024-04-17 16:31:20.351461] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:46.470 [2024-04-17 16:31:20.351491] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
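The failure loop above repeats once per second: each reconnect attempt ends with connect() errno 111, i.e. ECONNREFUSED, because the target's listener on 10.0.0.2:4420 has been removed, so reinitialization fails and the controller stays in the failed state until the listener returns. A small illustration of what errno 111 means at the socket level (address and port taken from the log; this is an illustrative sketch, not part of the test):

    import errno
    import socket

    # Attempt the same TCP connect the NVMe/TCP initiator keeps retrying.
    # With no listener bound to the port, Linux typically fails the
    # connect() with ECONNREFUSED, which is errno 111.
    try:
        with socket.create_connection(("10.0.0.2", 4420), timeout=1):
            print("connected - a listener is present")
    except OSError as exc:
        print(exc.errno, errno.errorcode.get(exc.errno, "?"))  # e.g. 111 ECONNREFUSED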
00:19:46.470 [2024-04-17 16:31:20.351503] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:46.470 16:31:20 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:46.727 [2024-04-17 16:31:20.609610] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:46.727 16:31:20 -- host/timeout.sh@92 -- # wait 88767 00:19:47.659 [2024-04-17 16:31:21.365472] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:54.218 00:19:54.218 Latency(us) 00:19:54.218 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:54.218 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:54.218 Verification LBA range: start 0x0 length 0x4000 00:19:54.218 NVMe0n1 : 10.00 6204.29 24.24 0.00 0.00 20592.20 603.23 3019898.88 00:19:54.218 =================================================================================================================== 00:19:54.218 Total : 6204.29 24.24 0.00 0.00 20592.20 603.23 3019898.88 00:19:54.218 0 00:19:54.218 16:31:28 -- host/timeout.sh@97 -- # rpc_pid=88884 00:19:54.218 16:31:28 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:54.218 16:31:28 -- host/timeout.sh@98 -- # sleep 1 00:19:54.477 Running I/O for 10 seconds... 00:19:55.415 16:31:29 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:55.677 [2024-04-17 16:31:29.502567] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.502626] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.502638] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.502647] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.502656] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.502664] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.502673] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.502681] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.502690] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.502698] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.502706] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.502715] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.502723] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.502731] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.502749] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.502757] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.502765] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.502788] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.502798] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.502806] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.502814] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.502822] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.502831] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.502839] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.502847] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.502855] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.502863] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.502872] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.502880] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.502888] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.502897] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.502905] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.502914] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 
00:19:55.677 [2024-04-17 16:31:29.502922] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.502931] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.502939] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.502948] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.502956] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.502964] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.502972] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.502980] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.502988] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.502998] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.503006] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.503015] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.503023] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.503031] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.503039] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.503047] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.503056] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.503064] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.503072] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.503080] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.503088] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.503096] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is 
same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.503104] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.503113] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.503121] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.503130] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.503138] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.503146] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.503155] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.503163] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.503171] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef0190 is same with the state(5) to be set 00:19:55.677 [2024-04-17 16:31:29.503435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.677 [2024-04-17 16:31:29.503477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.677 [2024-04-17 16:31:29.503502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:78720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.677 [2024-04-17 16:31:29.503514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.677 [2024-04-17 16:31:29.503527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:78728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.677 [2024-04-17 16:31:29.503537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.677 [2024-04-17 16:31:29.503549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:78736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.677 [2024-04-17 16:31:29.503558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.677 [2024-04-17 16:31:29.503570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:78744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.677 [2024-04-17 16:31:29.503580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.677 [2024-04-17 16:31:29.503592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:78752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.677 [2024-04-17 16:31:29.503601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.677 [2024-04-17 16:31:29.503612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:78760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.677 [2024-04-17 16:31:29.503622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.677 [2024-04-17 16:31:29.503633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:78768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.677 [2024-04-17 16:31:29.503642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.678 [2024-04-17 16:31:29.503654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:78776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.678 [2024-04-17 16:31:29.503663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.678 [2024-04-17 16:31:29.503674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:78784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.678 [2024-04-17 16:31:29.503683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.678 [2024-04-17 16:31:29.503695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.678 [2024-04-17 16:31:29.503704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.678 [2024-04-17 16:31:29.503715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:78800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.678 [2024-04-17 16:31:29.503725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.678 [2024-04-17 16:31:29.503736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:78808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.678 [2024-04-17 16:31:29.503745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.678 [2024-04-17 16:31:29.503756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:78816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.678 [2024-04-17 16:31:29.503766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.678 [2024-04-17 16:31:29.503804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:78824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.678 [2024-04-17 16:31:29.503815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.678 [2024-04-17 16:31:29.503826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:78832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.678 [2024-04-17 16:31:29.503836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:19:55.678 [2024-04-17 16:31:29.503847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:78840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.678 [2024-04-17 16:31:29.503859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.678 [2024-04-17 16:31:29.503870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:78848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.678 [2024-04-17 16:31:29.503880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.678 [2024-04-17 16:31:29.503892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:78856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.678 [2024-04-17 16:31:29.503901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.678 [2024-04-17 16:31:29.503912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:78864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.678 [2024-04-17 16:31:29.503922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.678 [2024-04-17 16:31:29.503934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:78872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.678 [2024-04-17 16:31:29.503943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.678 [2024-04-17 16:31:29.503954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:78880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.678 [2024-04-17 16:31:29.503963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.678 [2024-04-17 16:31:29.503975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:78888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.678 [2024-04-17 16:31:29.503984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.678 [2024-04-17 16:31:29.503996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:78896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.678 [2024-04-17 16:31:29.504006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.678 [2024-04-17 16:31:29.504017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:78904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.678 [2024-04-17 16:31:29.504027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.678 [2024-04-17 16:31:29.504039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:78912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.678 [2024-04-17 16:31:29.504049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.678 [2024-04-17 
16:31:29.504060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:78920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.678 [2024-04-17 16:31:29.504069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.678 [2024-04-17 16:31:29.504080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:78928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.678 [2024-04-17 16:31:29.504090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.678 [2024-04-17 16:31:29.504101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:78936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.678 [2024-04-17 16:31:29.504110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.678 [2024-04-17 16:31:29.504121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:78944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.678 [2024-04-17 16:31:29.504131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.678 [2024-04-17 16:31:29.504142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:78952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.678 [2024-04-17 16:31:29.504153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.678 [2024-04-17 16:31:29.504164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:78960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.678 [2024-04-17 16:31:29.504173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.678 [2024-04-17 16:31:29.504185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:78968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.678 [2024-04-17 16:31:29.504195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.678 [2024-04-17 16:31:29.504207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:78976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.678 [2024-04-17 16:31:29.504216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.678 [2024-04-17 16:31:29.504228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:78984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.678 [2024-04-17 16:31:29.504237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.678 [2024-04-17 16:31:29.504248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:78992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.678 [2024-04-17 16:31:29.504257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.678 [2024-04-17 16:31:29.504268] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:79000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.678 [2024-04-17 16:31:29.504278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.678 [2024-04-17 16:31:29.504289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:79008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.678 [2024-04-17 16:31:29.504299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.678 [2024-04-17 16:31:29.504319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:79016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.678 [2024-04-17 16:31:29.504328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.678 [2024-04-17 16:31:29.504340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:79024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.678 [2024-04-17 16:31:29.504350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.678 [2024-04-17 16:31:29.504361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:79032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.678 [2024-04-17 16:31:29.504371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.678 [2024-04-17 16:31:29.504383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:79040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.678 [2024-04-17 16:31:29.504392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.678 [2024-04-17 16:31:29.504403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:79048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.678 [2024-04-17 16:31:29.504412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.678 [2024-04-17 16:31:29.504424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:79056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.678 [2024-04-17 16:31:29.504433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.678 [2024-04-17 16:31:29.504444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:79064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.678 [2024-04-17 16:31:29.504454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.678 [2024-04-17 16:31:29.504465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:79072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.678 [2024-04-17 16:31:29.504474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.678 [2024-04-17 16:31:29.504486] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.678 [2024-04-17 16:31:29.504495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.678 [2024-04-17 16:31:29.504506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:79088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.678 [2024-04-17 16:31:29.504516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.679 [2024-04-17 16:31:29.504528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:79096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.679 [2024-04-17 16:31:29.504537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.679 [2024-04-17 16:31:29.504549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:79104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.679 [2024-04-17 16:31:29.504559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.679 [2024-04-17 16:31:29.504570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:79112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.679 [2024-04-17 16:31:29.504579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.679 [2024-04-17 16:31:29.504590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:79120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.679 [2024-04-17 16:31:29.504600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.679 [2024-04-17 16:31:29.504611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:79128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.679 [2024-04-17 16:31:29.504620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.679 [2024-04-17 16:31:29.504631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:79136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.679 [2024-04-17 16:31:29.504641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.679 [2024-04-17 16:31:29.504652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:79144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.679 [2024-04-17 16:31:29.504662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.679 [2024-04-17 16:31:29.504672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:79152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.679 [2024-04-17 16:31:29.504682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.679 [2024-04-17 16:31:29.504693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
[... 00:19:55.679-680 16:31:29.504703 .. 16:31:29.506234: ~71 identical nvme_qpair.c 243/474 *NOTICE* pairs elided: queued READ commands (sqid:1, nsid:1, lba:79160 through lba:79720 in steps of 8, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:19:55.680 [2024-04-17 16:31:29.506244] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108f3a0 is same with the state(5) to be set
00:19:55.680 [2024-04-17 16:31:29.506257] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:19:55.680 [2024-04-17 16:31:29.506269] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:19:55.681 [2024-04-17 16:31:29.506278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79728 len:8 PRP1 0x0 PRP2 0x0
00:19:55.681 [2024-04-17 16:31:29.506287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:55.681 [2024-04-17 16:31:29.506342] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x108f3a0 was disconnected and freed. reset controller.
[... 00:19:55.681 16:31:29.506417 .. 16:31:29.506509: 4 identical nvme_qpair.c 223/474 *NOTICE* pairs elided: queued admin ASYNC EVENT REQUEST (0c) commands (qid:0, cid:3 down to cid:0, cdw10:00000000 cdw11:00000000), each completed ABORTED - SQ DELETION (00/08) ...]
00:19:55.681 [2024-04-17 16:31:29.506518] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1027dc0 is same with the state(5) to be set
00:19:55.681 [2024-04-17 16:31:29.506743] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:55.681 [2024-04-17 16:31:29.506791] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1027dc0 (9): Bad file descriptor
00:19:55.681 [2024-04-17 16:31:29.506898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:55.681 [2024-04-17 16:31:29.506962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:55.681 [2024-04-17 16:31:29.506985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1027dc0 with addr=10.0.0.2, port=4420
00:19:55.681 [2024-04-17 16:31:29.506997] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1027dc0 is same with the state(5) to be set
00:19:55.681 [2024-04-17 16:31:29.507017] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1027dc0 (9): Bad file descriptor
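A note on the abort flood: "ABORTED - SQ DELETION (00/08)" is NVMe status code type 0x0 (generic) with status code 0x08, Command Aborted due to SQ Deletion. When the I/O qpair is torn down during the controller reset, every READ still queued on it is completed with this status before the qpair is freed, one command/completion NOTICE pair per outstanding I/O, so the number of such pairs tracks the queue depth rather than distinct failures. For triage it is usually enough to count and bound them rather than read them (a sketch; 'console.log' is a hypothetical saved copy of this output):

  # how many completions were aborted by the SQ deletion
  grep -c 'ABORTED - SQ DELETION' console.log
  # first and last LBA touched by the aborted READs
  grep -o 'lba:[0-9]*' console.log | sort -t: -k2 -n | sed -n '1p;$p'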
00:19:55.681 [2024-04-17 16:31:29.507033] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:55.681 [2024-04-17 16:31:29.507049] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:55.681 [2024-04-17 16:31:29.507058] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:55.681 [2024-04-17 16:31:29.520291] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:55.681 [2024-04-17 16:31:29.520344] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:55.681 16:31:29 -- host/timeout.sh@101 -- # sleep 3
00:19:56.617 [2024-04-17 16:31:30.520531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:56.617 [2024-04-17 16:31:30.520649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:56.617 [2024-04-17 16:31:30.520670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1027dc0 with addr=10.0.0.2, port=4420
00:19:56.617 [2024-04-17 16:31:30.520685] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1027dc0 is same with the state(5) to be set
00:19:56.617 [2024-04-17 16:31:30.520716] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1027dc0 (9): Bad file descriptor
00:19:56.617 [2024-04-17 16:31:30.520750] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:56.617 [2024-04-17 16:31:30.520762] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:56.617 [2024-04-17 16:31:30.520787] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:56.617 [2024-04-17 16:31:30.520819] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:56.617 [2024-04-17 16:31:30.520832] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:57.622 [2024-04-17 16:31:31.521037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:57.622 [2024-04-17 16:31:31.521187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:57.622 [2024-04-17 16:31:31.521221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1027dc0 with addr=10.0.0.2, port=4420
00:19:57.622 [2024-04-17 16:31:31.521244] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1027dc0 is same with the state(5) to be set
00:19:57.622 [2024-04-17 16:31:31.521288] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1027dc0 (9): Bad file descriptor
00:19:57.622 [2024-04-17 16:31:31.521342] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:57.622 [2024-04-17 16:31:31.521364] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:57.622 [2024-04-17 16:31:31.521382] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:57.622 [2024-04-17 16:31:31.521429] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:57.622 [2024-04-17 16:31:31.521452] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:58.559 [2024-04-17 16:31:32.521959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:58.559 [2024-04-17 16:31:32.522152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:58.559 [2024-04-17 16:31:32.522173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1027dc0 with addr=10.0.0.2, port=4420
00:19:58.559 [2024-04-17 16:31:32.522189] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1027dc0 is same with the state(5) to be set
00:19:58.559 [2024-04-17 16:31:32.522444] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1027dc0 (9): Bad file descriptor
00:19:58.559 [2024-04-17 16:31:32.522688] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:58.559 [2024-04-17 16:31:32.522710] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:58.559 [2024-04-17 16:31:32.522722] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:58.559 [2024-04-17 16:31:32.526733] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:58.559 [2024-04-17 16:31:32.526786] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:58.559 16:31:32 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:19:58.817 [2024-04-17 16:31:32.786235] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:19:58.817 16:31:32 -- host/timeout.sh@103 -- # wait 88884
00:19:59.753 [2024-04-17 16:31:33.558515] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
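The three identical failure cycles above are the expected reconnect behavior for this phase of host/timeout.sh: the target's listener was apparently taken down earlier in the script, so each reconnect attempt dies in connect() with errno 111 (ECONNREFUSED), the controller is marked failed, and bdev_nvme schedules another reset about once per second until timeout.sh@102 restores the listener at 16:31:32.786, after which the 16:31:33 attempt logs "Resetting controller successful." The target-side half of the exercise reduces to a listener bounce (a sketch using only the two RPCs visible in this log; the 3-second gap mirrors the script's sleep 3):

  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420  # initiator begins the errno-111 retry loop
  sleep 3
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420     # next retry reconnects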
00:20:05.022 
00:20:05.022 Latency(us)
00:20:05.022 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:05.022 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:05.022 Verification LBA range: start 0x0 length 0x4000
00:20:05.022 NVMe0n1 : 10.01 4683.90 18.30 3644.78 0.00 15334.75 718.66 3019898.88
00:20:05.022 ===================================================================================================================
00:20:05.022 Total : 4683.90 18.30 3644.78 0.00 15334.75 0.00 3019898.88
00:20:05.022 0
00:20:05.022 16:31:38 -- host/timeout.sh@105 -- # killprocess 88724
00:20:05.022 16:31:38 -- common/autotest_common.sh@936 -- # '[' -z 88724 ']'
00:20:05.022 16:31:38 -- common/autotest_common.sh@940 -- # kill -0 88724
00:20:05.022 16:31:38 -- common/autotest_common.sh@941 -- # uname
00:20:05.022 16:31:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:05.022 16:31:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88724
00:20:05.022 killing process with pid 88724
00:20:05.022 Received shutdown signal, test time was about 10.000000 seconds
00:20:05.022 
00:20:05.022 Latency(us)
00:20:05.022 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:05.022 ===================================================================================================================
00:20:05.022 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:05.022 16:31:38 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:20:05.022 16:31:38 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:20:05.022 16:31:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88724'
00:20:05.022 16:31:38 -- common/autotest_common.sh@955 -- # kill 88724
00:20:05.022 16:31:38 -- common/autotest_common.sh@960 -- # wait 88724
00:20:05.022 16:31:38 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:20:05.022 16:31:38 -- host/timeout.sh@110 -- # bdevperf_pid=89010
00:20:05.022 16:31:38 -- host/timeout.sh@112 -- # waitforlisten 89010 /var/tmp/bdevperf.sock
00:20:05.022 16:31:38 -- common/autotest_common.sh@817 -- # '[' -z 89010 ']'
00:20:05.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:20:05.022 16:31:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:20:05.022 16:31:38 -- common/autotest_common.sh@822 -- # local max_retries=100
00:20:05.022 16:31:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:20:05.022 16:31:38 -- common/autotest_common.sh@826 -- # xtrace_disable
00:20:05.022 16:31:38 -- common/autotest_common.sh@10 -- # set +x
00:20:05.022 [2024-04-17 16:31:38.731063] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization...
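A quick check on the first results table: MiB/s is just IOPS times the 4096-byte I/O size,

  4683.90 IOPS x 4096 B = 19185254.4 B/s; 19185254.4 / 1048576 = 18.30 MiB/s

which matches the printed column. Fail/s (3644.78) presumably counts the I/Os that came back with the SQ-deletion aborts above, and the 3019898.88 us (about 3.0 s) maximum latency is consistent with I/Os that sat queued across the roughly 3-second listener outage. The all-zero second table appears to be the shutdown summary printed by the bdevperf instance being killed. For the next run, as I read the bdevperf flags: -q 128 is queue depth, -o 4096 the I/O size in bytes, -w randread the workload, -t 10 the run time in seconds, -m 0x4 the core mask, -z wait for the perform_tests RPC, -r the RPC socket path.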
00:20:05.022 [2024-04-17 16:31:38.731931] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89010 ]
00:20:05.022 [2024-04-17 16:31:38.866901] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:05.022 [2024-04-17 16:31:38.985624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:20:05.956 16:31:39 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:20:05.956 16:31:39 -- common/autotest_common.sh@850 -- # return 0
00:20:05.956 16:31:39 -- host/timeout.sh@116 -- # dtrace_pid=89038
00:20:05.956 16:31:39 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 89010 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt
00:20:05.956 16:31:39 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
00:20:06.215 16:31:40 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
00:20:06.473 NVMe0n1
00:20:06.473 16:31:40 -- host/timeout.sh@124 -- # rpc_pid=89092
00:20:06.473 16:31:40 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:20:06.473 16:31:40 -- host/timeout.sh@125 -- # sleep 1
00:20:06.733 Running I/O for 10 seconds...
00:20:07.668 16:31:41 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:20:07.930 [2024-04-17 16:31:41.737928 .. 16:31:41.739094] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf91620 is same with the state(5) to be set [... the identical record repeated well over a hundred times in this interval; repetitions elided ...]
00:20:07.931 [2024-04-17 16:31:41.739440 .. 16:31:41.740056] nvme_qpair.c: 243/474: *NOTICE*: 27 queued READ command/completion pairs elided (sqid:1, nsid:1, randread LBAs 92440, 35936, 119592, ..., 13992, 61096, len:8 each, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:07.932 [2024-04-17 16:31:41.740067] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:68312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.932 [2024-04-17 16:31:41.740076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.932 [2024-04-17 16:31:41.740087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:96296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.932 [2024-04-17 16:31:41.740096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.932 [2024-04-17 16:31:41.740107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:92480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.932 [2024-04-17 16:31:41.740116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.932 [2024-04-17 16:31:41.740127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:23856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.932 [2024-04-17 16:31:41.740136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.932 [2024-04-17 16:31:41.740148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:69656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.932 [2024-04-17 16:31:41.740157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.932 [2024-04-17 16:31:41.740168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:18080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.932 [2024-04-17 16:31:41.740178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.932 [2024-04-17 16:31:41.740189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:130416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.932 [2024-04-17 16:31:41.740199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.932 [2024-04-17 16:31:41.740210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:97656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.932 [2024-04-17 16:31:41.740219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.932 [2024-04-17 16:31:41.740230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.932 [2024-04-17 16:31:41.740240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.932 [2024-04-17 16:31:41.740251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:128400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.932 [2024-04-17 16:31:41.740260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.932 [2024-04-17 16:31:41.740271] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:113 nsid:1 lba:29496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.932 [2024-04-17 16:31:41.740281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.932 [2024-04-17 16:31:41.740292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:61408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.932 [2024-04-17 16:31:41.740302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.932 [2024-04-17 16:31:41.740312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:115256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.932 [2024-04-17 16:31:41.740321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.932 [2024-04-17 16:31:41.740332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:29976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.932 [2024-04-17 16:31:41.740341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.932 [2024-04-17 16:31:41.740352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:129528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.932 [2024-04-17 16:31:41.740362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.932 [2024-04-17 16:31:41.740373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:67632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.933 [2024-04-17 16:31:41.740390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.933 [2024-04-17 16:31:41.740402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:16896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.933 [2024-04-17 16:31:41.740411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.933 [2024-04-17 16:31:41.740421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:94408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.933 [2024-04-17 16:31:41.740431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.933 [2024-04-17 16:31:41.740442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:106680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.933 [2024-04-17 16:31:41.740451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.933 [2024-04-17 16:31:41.740462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:40384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.933 [2024-04-17 16:31:41.740471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.933 [2024-04-17 16:31:41.740482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:109072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.933 [2024-04-17 16:31:41.740492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.933 [2024-04-17 16:31:41.740512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:8112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.933 [2024-04-17 16:31:41.740521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.933 [2024-04-17 16:31:41.740533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:9696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.933 [2024-04-17 16:31:41.740542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.933 [2024-04-17 16:31:41.740552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:121696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.933 [2024-04-17 16:31:41.740561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.933 [2024-04-17 16:31:41.740572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:126600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.933 [2024-04-17 16:31:41.740582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.933 [2024-04-17 16:31:41.740593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.933 [2024-04-17 16:31:41.740602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.933 [2024-04-17 16:31:41.740613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:75624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.933 [2024-04-17 16:31:41.740622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.933 [2024-04-17 16:31:41.740633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:102256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.933 [2024-04-17 16:31:41.740643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.933 [2024-04-17 16:31:41.740654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:84792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.933 [2024-04-17 16:31:41.740663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.933 [2024-04-17 16:31:41.740674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:77048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.933 [2024-04-17 16:31:41.740684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.933 [2024-04-17 16:31:41.740694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:94920 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:07.933 [2024-04-17 16:31:41.740703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.933 [2024-04-17 16:31:41.740714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:15288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.933 [2024-04-17 16:31:41.740723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.933 [2024-04-17 16:31:41.740734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:119616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.933 [2024-04-17 16:31:41.740743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.933 [2024-04-17 16:31:41.740754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:41736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.933 [2024-04-17 16:31:41.740764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.933 [2024-04-17 16:31:41.740785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:128224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.933 [2024-04-17 16:31:41.740796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.933 [2024-04-17 16:31:41.740807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.933 [2024-04-17 16:31:41.740816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.933 [2024-04-17 16:31:41.740827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:82264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.933 [2024-04-17 16:31:41.740836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.933 [2024-04-17 16:31:41.740851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.933 [2024-04-17 16:31:41.740861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.933 [2024-04-17 16:31:41.740872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:2624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.933 [2024-04-17 16:31:41.740881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.933 [2024-04-17 16:31:41.740892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:58824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.933 [2024-04-17 16:31:41.740901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.933 [2024-04-17 16:31:41.740912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.933 [2024-04-17 
16:31:41.740921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.933 [2024-04-17 16:31:41.740932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:96080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.933 [2024-04-17 16:31:41.740941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.933 [2024-04-17 16:31:41.740952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:28360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.933 [2024-04-17 16:31:41.740961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.933 [2024-04-17 16:31:41.740972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:129208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.933 [2024-04-17 16:31:41.740981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.933 [2024-04-17 16:31:41.740992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:8 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.933 [2024-04-17 16:31:41.741001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.933 [2024-04-17 16:31:41.741013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:8760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.933 [2024-04-17 16:31:41.741022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.933 [2024-04-17 16:31:41.741033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:36344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.933 [2024-04-17 16:31:41.741042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.933 [2024-04-17 16:31:41.741053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:20024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.933 [2024-04-17 16:31:41.741062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.933 [2024-04-17 16:31:41.741074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:80768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.933 [2024-04-17 16:31:41.741083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.933 [2024-04-17 16:31:41.741094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:91104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.933 [2024-04-17 16:31:41.741103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.933 [2024-04-17 16:31:41.741114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:78520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.933 [2024-04-17 16:31:41.741123] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.933 [2024-04-17 16:31:41.741134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.933 [2024-04-17 16:31:41.741143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.933 [2024-04-17 16:31:41.741154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:127720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.933 [2024-04-17 16:31:41.741162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.933 [2024-04-17 16:31:41.741178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:61288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.933 [2024-04-17 16:31:41.741187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.934 [2024-04-17 16:31:41.741198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:87936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.934 [2024-04-17 16:31:41.741207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.934 [2024-04-17 16:31:41.741218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:57768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.934 [2024-04-17 16:31:41.741228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.934 [2024-04-17 16:31:41.741239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.934 [2024-04-17 16:31:41.741248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.934 [2024-04-17 16:31:41.741258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:29528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.934 [2024-04-17 16:31:41.741267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.934 [2024-04-17 16:31:41.741278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:22104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.934 [2024-04-17 16:31:41.741287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.934 [2024-04-17 16:31:41.741298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.934 [2024-04-17 16:31:41.741307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.934 [2024-04-17 16:31:41.741318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:114080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.934 [2024-04-17 16:31:41.741327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.934 [2024-04-17 16:31:41.741338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:84440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.934 [2024-04-17 16:31:41.741347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.934 [2024-04-17 16:31:41.741357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:80016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.934 [2024-04-17 16:31:41.741367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.934 [2024-04-17 16:31:41.741378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:74936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.934 [2024-04-17 16:31:41.741387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.934 [2024-04-17 16:31:41.741398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:71648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.934 [2024-04-17 16:31:41.741407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.934 [2024-04-17 16:31:41.741418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:84056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.934 [2024-04-17 16:31:41.741427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.934 [2024-04-17 16:31:41.741454] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:07.934 [2024-04-17 16:31:41.741465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59192 len:8 PRP1 0x0 PRP2 0x0 00:20:07.934 [2024-04-17 16:31:41.741479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.934 [2024-04-17 16:31:41.741492] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:07.934 [2024-04-17 16:31:41.741500] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:07.934 [2024-04-17 16:31:41.741508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7904 len:8 PRP1 0x0 PRP2 0x0 00:20:07.934 [2024-04-17 16:31:41.741522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.934 [2024-04-17 16:31:41.741531] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:07.934 [2024-04-17 16:31:41.741539] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:07.934 [2024-04-17 16:31:41.741547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113608 len:8 PRP1 0x0 PRP2 0x0 00:20:07.934 [2024-04-17 16:31:41.741555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.934 [2024-04-17 16:31:41.741564] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:20:07.934 [2024-04-17 16:31:41.741571] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:07.934 [2024-04-17 16:31:41.741579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118800 len:8 PRP1 0x0 PRP2 0x0 00:20:07.934 [2024-04-17 16:31:41.741588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.934 [2024-04-17 16:31:41.741597] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:07.934 [2024-04-17 16:31:41.741609] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:07.934 [2024-04-17 16:31:41.741617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23232 len:8 PRP1 0x0 PRP2 0x0 00:20:07.934 [2024-04-17 16:31:41.741625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.934 [2024-04-17 16:31:41.741635] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:07.934 [2024-04-17 16:31:41.741642] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:07.934 [2024-04-17 16:31:41.741650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95240 len:8 PRP1 0x0 PRP2 0x0 00:20:07.934 [2024-04-17 16:31:41.741658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.934 [2024-04-17 16:31:41.741668] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:07.934 [2024-04-17 16:31:41.741675] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:07.934 [2024-04-17 16:31:41.741683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22432 len:8 PRP1 0x0 PRP2 0x0 00:20:07.934 [2024-04-17 16:31:41.741691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.934 [2024-04-17 16:31:41.741700] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:07.934 [2024-04-17 16:31:41.741707] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:07.934 [2024-04-17 16:31:41.741715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32136 len:8 PRP1 0x0 PRP2 0x0 00:20:07.934 [2024-04-17 16:31:41.741724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.934 [2024-04-17 16:31:41.741733] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:07.934 [2024-04-17 16:31:41.741740] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:07.934 [2024-04-17 16:31:41.741748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51984 len:8 PRP1 0x0 PRP2 0x0 00:20:07.934 [2024-04-17 16:31:41.741756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.934 [2024-04-17 16:31:41.741765] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:07.934 [2024-04-17 
16:31:41.741783] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:07.934 [2024-04-17 16:31:41.741791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23688 len:8 PRP1 0x0 PRP2 0x0 00:20:07.934 [2024-04-17 16:31:41.741805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.934 [2024-04-17 16:31:41.741815] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:07.934 [2024-04-17 16:31:41.741822] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:07.934 [2024-04-17 16:31:41.741830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33032 len:8 PRP1 0x0 PRP2 0x0 00:20:07.934 [2024-04-17 16:31:41.741839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.934 [2024-04-17 16:31:41.741848] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:07.934 [2024-04-17 16:31:41.741855] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:07.934 [2024-04-17 16:31:41.741862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63696 len:8 PRP1 0x0 PRP2 0x0 00:20:07.934 [2024-04-17 16:31:41.741871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.934 [2024-04-17 16:31:41.741880] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:07.934 [2024-04-17 16:31:41.741892] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:07.934 [2024-04-17 16:31:41.741900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98208 len:8 PRP1 0x0 PRP2 0x0 00:20:07.935 [2024-04-17 16:31:41.741909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.935 [2024-04-17 16:31:41.741918] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:07.935 [2024-04-17 16:31:41.741925] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:07.935 [2024-04-17 16:31:41.741933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50232 len:8 PRP1 0x0 PRP2 0x0 00:20:07.935 [2024-04-17 16:31:41.741941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.935 [2024-04-17 16:31:41.741950] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:07.935 [2024-04-17 16:31:41.741957] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:07.935 [2024-04-17 16:31:41.741964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42304 len:8 PRP1 0x0 PRP2 0x0 00:20:07.935 [2024-04-17 16:31:41.741973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.935 [2024-04-17 16:31:41.741982] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:07.935 [2024-04-17 16:31:41.741988] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:07.935 [2024-04-17 16:31:41.741996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25976 len:8 PRP1 0x0 PRP2 0x0 00:20:07.935 [2024-04-17 16:31:41.742005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.935 [2024-04-17 16:31:41.742014] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:07.935 [2024-04-17 16:31:41.742029] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:07.935 [2024-04-17 16:31:41.742036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73584 len:8 PRP1 0x0 PRP2 0x0 00:20:07.935 [2024-04-17 16:31:41.742044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.935 [2024-04-17 16:31:41.742055] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:07.935 [2024-04-17 16:31:41.742062] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:07.935 [2024-04-17 16:31:41.742070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96872 len:8 PRP1 0x0 PRP2 0x0 00:20:07.935 [2024-04-17 16:31:41.742093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.935 [2024-04-17 16:31:41.742103] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:07.935 [2024-04-17 16:31:41.742111] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:07.935 [2024-04-17 16:31:41.742118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47352 len:8 PRP1 0x0 PRP2 0x0 00:20:07.935 [2024-04-17 16:31:41.742127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.935 [2024-04-17 16:31:41.742136] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:07.935 [2024-04-17 16:31:41.742143] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:07.935 [2024-04-17 16:31:41.742151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:70304 len:8 PRP1 0x0 PRP2 0x0 00:20:07.935 [2024-04-17 16:31:41.742159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.935 [2024-04-17 16:31:41.742168] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:07.935 [2024-04-17 16:31:41.742180] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:07.935 [2024-04-17 16:31:41.742187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57568 len:8 PRP1 0x0 PRP2 0x0 00:20:07.935 [2024-04-17 16:31:41.742196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.935 [2024-04-17 16:31:41.742205] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:07.935 [2024-04-17 16:31:41.742212] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:20:07.935 [2024-04-17 16:31:41.742220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82408 len:8 PRP1 0x0 PRP2 0x0 00:20:07.935 [2024-04-17 16:31:41.742229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.935 [2024-04-17 16:31:41.742238] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:07.935 [2024-04-17 16:31:41.742245] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:07.935 [2024-04-17 16:31:41.742252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43112 len:8 PRP1 0x0 PRP2 0x0 00:20:07.935 [2024-04-17 16:31:41.742261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.935 [2024-04-17 16:31:41.742269] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:07.935 [2024-04-17 16:31:41.742277] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:07.935 [2024-04-17 16:31:41.755899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125648 len:8 PRP1 0x0 PRP2 0x0 00:20:07.935 [2024-04-17 16:31:41.755958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.935 [2024-04-17 16:31:41.755980] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:07.935 [2024-04-17 16:31:41.755992] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:07.935 [2024-04-17 16:31:41.756002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121336 len:8 PRP1 0x0 PRP2 0x0 00:20:07.935 [2024-04-17 16:31:41.756014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.935 [2024-04-17 16:31:41.756026] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:07.935 [2024-04-17 16:31:41.756035] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:07.935 [2024-04-17 16:31:41.756045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16656 len:8 PRP1 0x0 PRP2 0x0 00:20:07.935 [2024-04-17 16:31:41.756057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.935 [2024-04-17 16:31:41.756069] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:07.935 [2024-04-17 16:31:41.756078] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:07.935 [2024-04-17 16:31:41.756088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:34624 len:8 PRP1 0x0 PRP2 0x0 00:20:07.935 [2024-04-17 16:31:41.756099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.935 [2024-04-17 16:31:41.756110] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:07.935 [2024-04-17 16:31:41.756119] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:07.935 [2024-04-17 
16:31:41.756128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:58408 len:8 PRP1 0x0 PRP2 0x0 00:20:07.935 [2024-04-17 16:31:41.756139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.935 [2024-04-17 16:31:41.756150] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:07.935 [2024-04-17 16:31:41.756160] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:07.935 [2024-04-17 16:31:41.756170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86048 len:8 PRP1 0x0 PRP2 0x0 00:20:07.935 [2024-04-17 16:31:41.756180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.935 [2024-04-17 16:31:41.756192] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:07.935 [2024-04-17 16:31:41.756201] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:07.935 [2024-04-17 16:31:41.756211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94648 len:8 PRP1 0x0 PRP2 0x0 00:20:07.936 [2024-04-17 16:31:41.756234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.936 [2024-04-17 16:31:41.756245] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:07.936 [2024-04-17 16:31:41.756254] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:07.936 [2024-04-17 16:31:41.756264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32952 len:8 PRP1 0x0 PRP2 0x0 00:20:07.936 [2024-04-17 16:31:41.756275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.936 [2024-04-17 16:31:41.756286] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:07.936 [2024-04-17 16:31:41.756295] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:07.936 [2024-04-17 16:31:41.756305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86056 len:8 PRP1 0x0 PRP2 0x0 00:20:07.936 [2024-04-17 16:31:41.756315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.936 [2024-04-17 16:31:41.756327] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:07.936 [2024-04-17 16:31:41.756335] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:07.936 [2024-04-17 16:31:41.756345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125000 len:8 PRP1 0x0 PRP2 0x0 00:20:07.936 [2024-04-17 16:31:41.756356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.936 [2024-04-17 16:31:41.756367] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:07.936 [2024-04-17 16:31:41.756375] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:07.936 [2024-04-17 16:31:41.756385] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61736 len:8 PRP1 0x0 PRP2 0x0 00:20:07.936 [2024-04-17 16:31:41.756396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.936 [2024-04-17 16:31:41.756408] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:07.936 [2024-04-17 16:31:41.756417] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:07.936 [2024-04-17 16:31:41.756426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51064 len:8 PRP1 0x0 PRP2 0x0 00:20:07.936 [2024-04-17 16:31:41.756438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.936 [2024-04-17 16:31:41.756530] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1387f70 was disconnected and freed. reset controller. 00:20:07.936 [2024-04-17 16:31:41.756688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.936 [2024-04-17 16:31:41.756721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.936 [2024-04-17 16:31:41.756737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.936 [2024-04-17 16:31:41.756749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.936 [2024-04-17 16:31:41.756761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.936 [2024-04-17 16:31:41.756790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.936 [2024-04-17 16:31:41.756806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.936 [2024-04-17 16:31:41.756817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.936 [2024-04-17 16:31:41.756829] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131edc0 is same with the state(5) to be set 00:20:07.936 [2024-04-17 16:31:41.757147] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:07.936 [2024-04-17 16:31:41.757180] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x131edc0 (9): Bad file descriptor 00:20:07.936 [2024-04-17 16:31:41.757307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.936 [2024-04-17 16:31:41.757367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.936 [2024-04-17 16:31:41.757386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131edc0 with addr=10.0.0.2, port=4420 00:20:07.936 [2024-04-17 16:31:41.757399] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131edc0 is same with the state(5) to be set 00:20:07.936 [2024-04-17 16:31:41.757421] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x131edc0 (9): Bad file descriptor 00:20:07.936 [2024-04-17 16:31:41.757440] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:07.936 [2024-04-17 16:31:41.757451] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:07.936 [2024-04-17 16:31:41.757464] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:07.936 [2024-04-17 16:31:41.757487] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.936 [2024-04-17 16:31:41.757499] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:07.936 16:31:41 -- host/timeout.sh@128 -- # wait 89092 00:20:09.842 [2024-04-17 16:31:43.757725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.842 [2024-04-17 16:31:43.757852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.842 [2024-04-17 16:31:43.757873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131edc0 with addr=10.0.0.2, port=4420 00:20:09.842 [2024-04-17 16:31:43.757892] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131edc0 is same with the state(5) to be set 00:20:09.842 [2024-04-17 16:31:43.757920] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x131edc0 (9): Bad file descriptor 00:20:09.842 [2024-04-17 16:31:43.757940] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:09.842 [2024-04-17 16:31:43.757950] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:09.842 [2024-04-17 16:31:43.757961] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:09.842 [2024-04-17 16:31:43.757989] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.842 [2024-04-17 16:31:43.758001] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:11.790 [2024-04-17 16:31:45.758196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.790 [2024-04-17 16:31:45.758291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.790 [2024-04-17 16:31:45.758312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131edc0 with addr=10.0.0.2, port=4420 00:20:11.790 [2024-04-17 16:31:45.758326] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131edc0 is same with the state(5) to be set 00:20:11.790 [2024-04-17 16:31:45.758353] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x131edc0 (9): Bad file descriptor 00:20:11.790 [2024-04-17 16:31:45.758373] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:11.790 [2024-04-17 16:31:45.758382] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:11.790 [2024-04-17 16:31:45.758393] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
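
The repeated connect() failed, errno = 111 entries above are ECONNREFUSED: nothing is accepting TCP connections at 10.0.0.2:4420 while the target is held down, which is the condition this timeout test provokes before each reconnect attempt. A minimal bash sketch of the same reachability probe, assuming only the address and port shown in the trace; it is illustrative only and not part of the SPDK test suite:

#!/usr/bin/env bash
# Minimal reachability probe; address and port are taken from the trace
# above. Illustrative sketch, not part of the SPDK test suite.
TARGET_IP=10.0.0.2    # listen address from the log
TARGET_PORT=4420      # NVMe/TCP port from the log

# bash's /dev/tcp pseudo-device attempts a TCP connect(); with no listener
# the connect fails exactly like the errno = 111 (ECONNREFUSED) lines above.
if timeout 1 bash -c ">/dev/tcp/${TARGET_IP}/${TARGET_PORT}" 2>/dev/null; then
    echo "listener is back on ${TARGET_IP}:${TARGET_PORT}"
else
    echo "connect refused - matches errno = 111 in the trace"
fi
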
00:20:11.790 [2024-04-17 16:31:45.758421] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:11.790 [2024-04-17 16:31:45.758432] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:14.325 [2024-04-17 16:31:47.758561] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:14.892 00:20:14.892 Latency(us) 00:20:14.892 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:14.892 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:20:14.892 NVMe0n1 : 8.12 2299.69 8.98 15.76 0.00 55343.87 2368.23 7046430.72 00:20:14.892 =================================================================================================================== 00:20:14.892 Total : 2299.69 8.98 15.76 0.00 55343.87 2368.23 7046430.72 00:20:14.892 0 00:20:14.892 16:31:48 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:14.892 Attaching 5 probes... 00:20:14.892 1347.738432: reset bdev controller NVMe0 00:20:14.892 1347.826491: reconnect bdev controller NVMe0 00:20:14.892 3348.166246: reconnect delay bdev controller NVMe0 00:20:14.892 3348.193698: reconnect bdev controller NVMe0 00:20:14.892 5348.641004: reconnect delay bdev controller NVMe0 00:20:14.892 5348.663159: reconnect bdev controller NVMe0 00:20:14.892 7349.101378: reconnect delay bdev controller NVMe0 00:20:14.892 7349.140228: reconnect bdev controller NVMe0 00:20:14.892 16:31:48 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:20:14.892 16:31:48 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:20:14.892 16:31:48 -- host/timeout.sh@136 -- # kill 89038 00:20:14.892 16:31:48 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:14.892 16:31:48 -- host/timeout.sh@139 -- # killprocess 89010 00:20:14.892 16:31:48 -- common/autotest_common.sh@936 -- # '[' -z 89010 ']' 00:20:14.892 16:31:48 -- common/autotest_common.sh@940 -- # kill -0 89010 00:20:14.892 16:31:48 -- common/autotest_common.sh@941 -- # uname 00:20:14.892 16:31:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:14.892 16:31:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89010 00:20:14.892 killing process with pid 89010 00:20:14.892 Received shutdown signal, test time was about 8.178706 seconds 00:20:14.892 00:20:14.892 Latency(us) 00:20:14.892 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:14.892 =================================================================================================================== 00:20:14.892 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:14.892 16:31:48 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:14.893 16:31:48 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:14.893 16:31:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89010' 00:20:14.893 16:31:48 -- common/autotest_common.sh@955 -- # kill 89010 00:20:14.893 16:31:48 -- common/autotest_common.sh@960 -- # wait 89010 00:20:15.151 16:31:49 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:15.409 16:31:49 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:20:15.409 16:31:49 -- host/timeout.sh@145 -- # nvmftestfini 00:20:15.409 16:31:49 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:15.409 16:31:49 -- nvmf/common.sh@117 -- # sync 
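
The pass/fail criterion for this run is visible in the trace above: host/timeout.sh counts 'reconnect delay bdev controller NVMe0' lines in trace.txt and fails if 2 or fewer are found; here the guard (( 3 <= 2 )) evaluates false, so the test passes and pid 89038 is cleaned up. A hedged sketch of that counting check, paraphrasing what the trace prints rather than copying the actual script:

#!/usr/bin/env bash
# Sketch of the reconnect-delay check traced above; the trace.txt path and
# the threshold of 2 mirror what host/timeout.sh prints, but this is a
# paraphrase, not the actual script.
trace_txt=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
delays=$(grep -c 'reconnect delay bdev controller NVMe0' "$trace_txt")

# The run above recorded 3 delay events, so (( 3 <= 2 )) was false and the
# test passed before the trace file was removed.
if (( delays <= 2 )); then
    echo "FAIL: only ${delays} reconnect delays observed"
    exit 1
fi
echo "PASS: ${delays} reconnect delays observed"
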
00:20:15.409 16:31:49 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:15.409 16:31:49 -- nvmf/common.sh@120 -- # set +e 00:20:15.409 16:31:49 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:15.409 16:31:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:15.409 rmmod nvme_tcp 00:20:15.409 rmmod nvme_fabrics 00:20:15.409 rmmod nvme_keyring 00:20:15.409 16:31:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:15.409 16:31:49 -- nvmf/common.sh@124 -- # set -e 00:20:15.409 16:31:49 -- nvmf/common.sh@125 -- # return 0 00:20:15.409 16:31:49 -- nvmf/common.sh@478 -- # '[' -n 88423 ']' 00:20:15.409 16:31:49 -- nvmf/common.sh@479 -- # killprocess 88423 00:20:15.409 16:31:49 -- common/autotest_common.sh@936 -- # '[' -z 88423 ']' 00:20:15.409 16:31:49 -- common/autotest_common.sh@940 -- # kill -0 88423 00:20:15.409 16:31:49 -- common/autotest_common.sh@941 -- # uname 00:20:15.409 16:31:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:15.409 16:31:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88423 00:20:15.409 killing process with pid 88423 00:20:15.409 16:31:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:15.409 16:31:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:15.409 16:31:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88423' 00:20:15.409 16:31:49 -- common/autotest_common.sh@955 -- # kill 88423 00:20:15.409 16:31:49 -- common/autotest_common.sh@960 -- # wait 88423 00:20:15.667 16:31:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:15.667 16:31:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:15.667 16:31:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:15.667 16:31:49 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:15.667 16:31:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:15.667 16:31:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.667 16:31:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:15.667 16:31:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.926 16:31:49 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:15.926 00:20:15.926 real 0m47.700s 00:20:15.926 user 2m20.730s 00:20:15.926 sys 0m4.994s 00:20:15.926 16:31:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:15.926 16:31:49 -- common/autotest_common.sh@10 -- # set +x 00:20:15.926 ************************************ 00:20:15.926 END TEST nvmf_timeout 00:20:15.926 ************************************ 00:20:15.926 16:31:49 -- nvmf/nvmf.sh@118 -- # [[ virt == phy ]] 00:20:15.926 16:31:49 -- nvmf/nvmf.sh@123 -- # timing_exit host 00:20:15.926 16:31:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:15.926 16:31:49 -- common/autotest_common.sh@10 -- # set +x 00:20:15.926 16:31:49 -- nvmf/nvmf.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:20:15.926 00:20:15.926 real 12m16.727s 00:20:15.926 user 32m32.105s 00:20:15.926 sys 2m47.078s 00:20:15.926 16:31:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:15.926 16:31:49 -- common/autotest_common.sh@10 -- # set +x 00:20:15.926 ************************************ 00:20:15.926 END TEST nvmf_tcp 00:20:15.926 ************************************ 00:20:15.926 16:31:49 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:20:15.926 16:31:49 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:20:15.926 16:31:49 -- 
common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:15.926 16:31:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:15.926 16:31:49 -- common/autotest_common.sh@10 -- # set +x 00:20:15.926 ************************************ 00:20:15.926 START TEST spdkcli_nvmf_tcp 00:20:15.926 ************************************ 00:20:15.926 16:31:49 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:20:16.201 * Looking for test storage... 00:20:16.201 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:20:16.201 16:31:50 -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:20:16.201 16:31:50 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:20:16.201 16:31:50 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:20:16.201 16:31:50 -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:16.201 16:31:50 -- nvmf/common.sh@7 -- # uname -s 00:20:16.201 16:31:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:16.201 16:31:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:16.201 16:31:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:16.201 16:31:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:16.201 16:31:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:16.201 16:31:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:16.201 16:31:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:16.201 16:31:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:16.201 16:31:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:16.201 16:31:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:16.201 16:31:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d 00:20:16.201 16:31:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=35bbb10f-fc38-42ac-b909-033700c5e05d 00:20:16.201 16:31:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:16.201 16:31:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:16.201 16:31:50 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:16.201 16:31:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:16.201 16:31:50 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:16.201 16:31:50 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:16.201 16:31:50 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:16.201 16:31:50 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:16.201 16:31:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.201 16:31:50 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.201 16:31:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.201 16:31:50 -- paths/export.sh@5 -- # export PATH 00:20:16.201 16:31:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.201 16:31:50 -- nvmf/common.sh@47 -- # : 0 00:20:16.201 16:31:50 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:16.201 16:31:50 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:16.201 16:31:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:16.201 16:31:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:16.201 16:31:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:16.201 16:31:50 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:16.201 16:31:50 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:16.201 16:31:50 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:16.201 16:31:50 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:20:16.201 16:31:50 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:20:16.201 16:31:50 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:20:16.201 16:31:50 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:20:16.201 16:31:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:16.201 16:31:50 -- common/autotest_common.sh@10 -- # set +x 00:20:16.201 16:31:50 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:20:16.201 16:31:50 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=89315 00:20:16.201 16:31:50 -- spdkcli/common.sh@34 -- # waitforlisten 89315 00:20:16.201 16:31:50 -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:20:16.201 16:31:50 -- common/autotest_common.sh@817 -- # '[' -z 89315 ']' 00:20:16.201 16:31:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:16.201 16:31:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:16.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:16.201 16:31:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:16.201 16:31:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:16.201 16:31:50 -- common/autotest_common.sh@10 -- # set +x 00:20:16.201 [2024-04-17 16:31:50.114301] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 
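The spdkcli test drives a dedicated nvmf_tgt instance: -m 0x3 pins it to two cores, -p 0 selects the main core, and the harness then blocks on the /var/tmp/spdk.sock RPC socket before issuing any commands. A rough equivalent outside the harness (the polling loop is an assumption; the harness uses its own waitforlisten helper):

    # Sketch: start the target and wait until its JSON-RPC socket answers.
    ROOT=/home/vagrant/spdk_repo/spdk
    "$ROOT/build/bin/nvmf_tgt" -m 0x3 -p 0 &
    tgt_pid=$!
    until "$ROOT/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5   # assumed poll interval, not taken from the log
    done
    echo "nvmf_tgt (pid $tgt_pid) is up on /var/tmp/spdk.sock"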
00:20:16.201 [2024-04-17 16:31:50.114406] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89315 ] 00:20:16.460 [2024-04-17 16:31:50.250292] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:16.460 [2024-04-17 16:31:50.374746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:16.460 [2024-04-17 16:31:50.374757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.027 16:31:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:17.027 16:31:51 -- common/autotest_common.sh@850 -- # return 0 00:20:17.027 16:31:51 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:20:17.027 16:31:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:17.027 16:31:51 -- common/autotest_common.sh@10 -- # set +x 00:20:17.285 16:31:51 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:20:17.285 16:31:51 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:20:17.285 16:31:51 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:20:17.285 16:31:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:17.285 16:31:51 -- common/autotest_common.sh@10 -- # set +x 00:20:17.285 16:31:51 -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:20:17.285 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:20:17.285 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:20:17.285 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:20:17.285 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:20:17.285 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:20:17.285 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:20:17.285 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:20:17.285 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:20:17.285 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:20:17.285 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:20:17.285 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:20:17.285 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:20:17.285 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:20:17.285 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:20:17.285 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:20:17.285 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:20:17.285 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:20:17.285 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:20:17.285 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:20:17.285 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:20:17.285 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:20:17.285 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:20:17.285 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:20:17.285 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:20:17.285 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:20:17.285 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:20:17.285 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:20:17.285 ' 00:20:17.544 [2024-04-17 16:31:51.547980] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:20:20.073 [2024-04-17 16:31:53.785749] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:21.449 [2024-04-17 16:31:55.066912] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:20:23.402 [2024-04-17 16:31:57.420679] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:20:25.935 [2024-04-17 16:31:59.434157] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:20:27.310 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:20:27.310 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:20:27.310 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:20:27.310 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:20:27.310 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:20:27.310 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:20:27.310 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:20:27.310 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:20:27.310 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:20:27.310 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:20:27.310 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:20:27.310 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:20:27.310 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:20:27.310 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:20:27.310 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:20:27.310 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:20:27.310 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:20:27.310 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:20:27.310 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:20:27.310 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:20:27.310 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:20:27.310 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:20:27.310 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:20:27.310 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:20:27.310 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:20:27.310 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:20:27.310 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:20:27.310 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:20:27.310 16:32:01 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:20:27.310 16:32:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:27.310 16:32:01 -- common/autotest_common.sh@10 -- # set +x 00:20:27.310 16:32:01 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:20:27.310 16:32:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:27.310 16:32:01 -- common/autotest_common.sh@10 -- # set +x 00:20:27.310 16:32:01 -- spdkcli/nvmf.sh@69 -- # check_match 00:20:27.310 16:32:01 -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:20:27.569 16:32:01 -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:20:27.569 16:32:01 -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:20:27.569 16:32:01 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:20:27.569 16:32:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:27.569 16:32:01 -- common/autotest_common.sh@10 -- # set +x 00:20:27.830 16:32:01 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:20:27.830 16:32:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:27.830 16:32:01 -- common/autotest_common.sh@10 -- # set +x 00:20:27.830 16:32:01 -- spdkcli/nvmf.sh@87 -- # 
/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:20:27.830 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:20:27.830 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:20:27.830 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:20:27.830 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:20:27.830 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:20:27.830 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:20:27.830 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:20:27.830 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:20:27.830 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:20:27.830 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:20:27.830 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:20:27.830 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:20:27.830 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:20:27.830 ' 00:20:33.095 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:20:33.095 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:20:33.095 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:20:33.095 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:20:33.095 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:20:33.095 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:20:33.095 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:20:33.095 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:20:33.095 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:20:33.095 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:20:33.095 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:20:33.095 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:20:33.095 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:20:33.095 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:20:33.354 16:32:07 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:20:33.354 16:32:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:33.354 16:32:07 -- common/autotest_common.sh@10 -- # set +x 00:20:33.354 16:32:07 -- spdkcli/nvmf.sh@90 -- # killprocess 89315 00:20:33.354 16:32:07 -- common/autotest_common.sh@936 -- # '[' -z 89315 ']' 00:20:33.354 16:32:07 -- common/autotest_common.sh@940 -- # kill -0 89315 00:20:33.354 16:32:07 -- common/autotest_common.sh@941 -- # uname 00:20:33.355 16:32:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:33.355 16:32:07 -- common/autotest_common.sh@942 
-- # ps --no-headers -o comm= 89315 00:20:33.355 16:32:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:33.355 16:32:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:33.355 killing process with pid 89315 00:20:33.355 16:32:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89315' 00:20:33.355 16:32:07 -- common/autotest_common.sh@955 -- # kill 89315 00:20:33.355 [2024-04-17 16:32:07.252822] app.c: 930:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:20:33.355 16:32:07 -- common/autotest_common.sh@960 -- # wait 89315 00:20:33.612 16:32:07 -- spdkcli/nvmf.sh@1 -- # cleanup 00:20:33.612 16:32:07 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:20:33.612 16:32:07 -- spdkcli/common.sh@13 -- # '[' -n 89315 ']' 00:20:33.612 16:32:07 -- spdkcli/common.sh@14 -- # killprocess 89315 00:20:33.612 16:32:07 -- common/autotest_common.sh@936 -- # '[' -z 89315 ']' 00:20:33.612 16:32:07 -- common/autotest_common.sh@940 -- # kill -0 89315 00:20:33.612 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (89315) - No such process 00:20:33.612 Process with pid 89315 is not found 00:20:33.612 16:32:07 -- common/autotest_common.sh@963 -- # echo 'Process with pid 89315 is not found' 00:20:33.612 16:32:07 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:20:33.612 16:32:07 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:20:33.612 16:32:07 -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:20:33.612 00:20:33.612 real 0m17.580s 00:20:33.612 user 0m37.898s 00:20:33.612 sys 0m0.931s 00:20:33.612 16:32:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:33.612 16:32:07 -- common/autotest_common.sh@10 -- # set +x 00:20:33.612 ************************************ 00:20:33.612 END TEST spdkcli_nvmf_tcp 00:20:33.612 ************************************ 00:20:33.612 16:32:07 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:20:33.612 16:32:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:33.612 16:32:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:33.612 16:32:07 -- common/autotest_common.sh@10 -- # set +x 00:20:33.612 ************************************ 00:20:33.612 START TEST nvmf_identify_passthru 00:20:33.612 ************************************ 00:20:33.612 16:32:07 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:20:33.871 * Looking for test storage... 
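The spdkcli test that just finished builds and tears down its whole NVMe-oF configuration from shell-quoted command lists fed through spdkcli_job.py. Outside the harness the same commands work through scripts/spdkcli.py, as the 'll /nvmf' call in the match step already shows; a minimal sketch using commands taken verbatim from the run (running one process per command is the only assumption):

    # Sketch: recreate the core of the spdkcli config with individual calls.
    SPDKCLI=/home/vagrant/spdk_repo/spdk/scripts/spdkcli.py
    "$SPDKCLI" '/bdevs/malloc create 32 512 Malloc1'
    "$SPDKCLI" 'nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'
    "$SPDKCLI" '/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'
    "$SPDKCLI" '/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'
    "$SPDKCLI" 'll /nvmf'   # inspect the resulting tree, as the test's match step does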
00:20:33.871 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:33.871 16:32:07 -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:33.871 16:32:07 -- nvmf/common.sh@7 -- # uname -s 00:20:33.871 16:32:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:33.871 16:32:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:33.871 16:32:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:33.871 16:32:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:33.871 16:32:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:33.871 16:32:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:33.871 16:32:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:33.871 16:32:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:33.871 16:32:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:33.871 16:32:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:33.871 16:32:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d 00:20:33.871 16:32:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=35bbb10f-fc38-42ac-b909-033700c5e05d 00:20:33.871 16:32:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:33.871 16:32:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:33.871 16:32:07 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:33.871 16:32:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:33.871 16:32:07 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:33.871 16:32:07 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:33.871 16:32:07 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:33.871 16:32:07 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:33.871 16:32:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.871 16:32:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.871 16:32:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.871 16:32:07 -- paths/export.sh@5 -- # export PATH 00:20:33.871 16:32:07 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.871 16:32:07 -- nvmf/common.sh@47 -- # : 0 00:20:33.871 16:32:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:33.871 16:32:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:33.871 16:32:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:33.871 16:32:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:33.871 16:32:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:33.871 16:32:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:33.871 16:32:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:33.871 16:32:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:33.871 16:32:07 -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:33.871 16:32:07 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:33.871 16:32:07 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:33.871 16:32:07 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:33.871 16:32:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.871 16:32:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.871 16:32:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.871 16:32:07 -- paths/export.sh@5 -- # export PATH 00:20:33.871 16:32:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.871 16:32:07 -- 
target/identify_passthru.sh@12 -- # nvmftestinit 00:20:33.871 16:32:07 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:33.871 16:32:07 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:33.871 16:32:07 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:33.871 16:32:07 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:33.871 16:32:07 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:33.871 16:32:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:33.871 16:32:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:33.871 16:32:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:33.871 16:32:07 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:20:33.871 16:32:07 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:20:33.871 16:32:07 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:20:33.871 16:32:07 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:20:33.871 16:32:07 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:20:33.871 16:32:07 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:20:33.871 16:32:07 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:33.871 16:32:07 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:33.871 16:32:07 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:33.871 16:32:07 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:33.871 16:32:07 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:33.871 16:32:07 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:33.871 16:32:07 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:33.871 16:32:07 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:33.871 16:32:07 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:33.871 16:32:07 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:33.871 16:32:07 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:33.871 16:32:07 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:33.871 16:32:07 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:33.871 16:32:07 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:33.871 Cannot find device "nvmf_tgt_br" 00:20:33.871 16:32:07 -- nvmf/common.sh@155 -- # true 00:20:33.871 16:32:07 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:33.871 Cannot find device "nvmf_tgt_br2" 00:20:33.871 16:32:07 -- nvmf/common.sh@156 -- # true 00:20:33.871 16:32:07 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:33.871 16:32:07 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:33.871 Cannot find device "nvmf_tgt_br" 00:20:33.871 16:32:07 -- nvmf/common.sh@158 -- # true 00:20:33.871 16:32:07 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:33.871 Cannot find device "nvmf_tgt_br2" 00:20:33.871 16:32:07 -- nvmf/common.sh@159 -- # true 00:20:33.871 16:32:07 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:33.872 16:32:07 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:33.872 16:32:07 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:33.872 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:33.872 16:32:07 -- nvmf/common.sh@162 -- # true 00:20:33.872 16:32:07 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:33.872 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:20:33.872 16:32:07 -- nvmf/common.sh@163 -- # true 00:20:33.872 16:32:07 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:33.872 16:32:07 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:33.872 16:32:07 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:33.872 16:32:07 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:33.872 16:32:07 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:34.130 16:32:07 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:34.130 16:32:07 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:34.130 16:32:07 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:34.130 16:32:07 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:34.130 16:32:07 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:34.130 16:32:07 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:34.130 16:32:07 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:34.130 16:32:07 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:34.130 16:32:07 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:34.130 16:32:08 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:34.130 16:32:08 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:34.130 16:32:08 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:34.130 16:32:08 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:34.130 16:32:08 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:34.130 16:32:08 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:34.130 16:32:08 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:34.130 16:32:08 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:34.130 16:32:08 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:34.130 16:32:08 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:34.130 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:34.130 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:20:34.130 00:20:34.130 --- 10.0.0.2 ping statistics --- 00:20:34.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.130 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:20:34.130 16:32:08 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:34.130 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:34.130 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:20:34.130 00:20:34.130 --- 10.0.0.3 ping statistics --- 00:20:34.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.130 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:20:34.130 16:32:08 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:34.130 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:34.130 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:20:34.130 00:20:34.130 --- 10.0.0.1 ping statistics --- 00:20:34.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.130 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:20:34.130 16:32:08 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:34.130 16:32:08 -- nvmf/common.sh@422 -- # return 0 00:20:34.130 16:32:08 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:34.130 16:32:08 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:34.130 16:32:08 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:34.130 16:32:08 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:34.130 16:32:08 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:34.130 16:32:08 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:34.130 16:32:08 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:34.130 16:32:08 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:20:34.130 16:32:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:34.130 16:32:08 -- common/autotest_common.sh@10 -- # set +x 00:20:34.130 16:32:08 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:20:34.130 16:32:08 -- common/autotest_common.sh@1510 -- # bdfs=() 00:20:34.130 16:32:08 -- common/autotest_common.sh@1510 -- # local bdfs 00:20:34.130 16:32:08 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:20:34.130 16:32:08 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:20:34.130 16:32:08 -- common/autotest_common.sh@1499 -- # bdfs=() 00:20:34.130 16:32:08 -- common/autotest_common.sh@1499 -- # local bdfs 00:20:34.130 16:32:08 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:20:34.130 16:32:08 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:34.131 16:32:08 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:20:34.131 16:32:08 -- common/autotest_common.sh@1501 -- # (( 2 == 0 )) 00:20:34.131 16:32:08 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:20:34.389 16:32:08 -- common/autotest_common.sh@1513 -- # echo 0000:00:10.0 00:20:34.389 16:32:08 -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:20:34.389 16:32:08 -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:20:34.389 16:32:08 -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:20:34.389 16:32:08 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:20:34.389 16:32:08 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:20:34.389 16:32:08 -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 00:20:34.389 16:32:08 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:20:34.389 16:32:08 -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:20:34.389 16:32:08 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:20:34.647 16:32:08 -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:20:34.647 16:32:08 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:20:34.647 16:32:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:34.647 16:32:08 -- common/autotest_common.sh@10 -- # set +x 00:20:34.647 16:32:08 -- target/identify_passthru.sh@28 -- # timing_enter 
start_nvmf_tgt 00:20:34.647 16:32:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:34.647 16:32:08 -- common/autotest_common.sh@10 -- # set +x 00:20:34.647 16:32:08 -- target/identify_passthru.sh@31 -- # nvmfpid=89820 00:20:34.647 16:32:08 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:34.647 16:32:08 -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:34.647 16:32:08 -- target/identify_passthru.sh@35 -- # waitforlisten 89820 00:20:34.647 16:32:08 -- common/autotest_common.sh@817 -- # '[' -z 89820 ']' 00:20:34.647 16:32:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:34.647 16:32:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:34.647 16:32:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:34.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:34.647 16:32:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:34.647 16:32:08 -- common/autotest_common.sh@10 -- # set +x 00:20:34.647 [2024-04-17 16:32:08.622967] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:20:34.647 [2024-04-17 16:32:08.623063] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:34.905 [2024-04-17 16:32:08.761422] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:34.905 [2024-04-17 16:32:08.886623] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:34.905 [2024-04-17 16:32:08.886681] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:34.905 [2024-04-17 16:32:08.886693] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:34.905 [2024-04-17 16:32:08.886702] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:34.905 [2024-04-17 16:32:08.886709] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
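The app_setup_trace NOTICE lines above are the target's own hint for trace capture: the test enables tracepoint group mask 0xFFFF with -e, so events accumulate in a shared-memory buffer named after the app and its -i shm id. Following that hint literally (only the output filenames are assumptions):

    # Sketch: snapshot the live trace of the nvmf app started with -i 0.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace_snapshot.txt
    # Or keep the raw buffer for offline parsing, as the second NOTICE suggests.
    cp /dev/shm/nvmf_trace.0 ./nvmf_trace.0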
00:20:34.905 [2024-04-17 16:32:08.886811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:34.905 [2024-04-17 16:32:08.886866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:34.905 [2024-04-17 16:32:08.886990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:34.905 [2024-04-17 16:32:08.886994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:35.840 16:32:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:35.840 16:32:09 -- common/autotest_common.sh@850 -- # return 0 00:20:35.840 16:32:09 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:20:35.841 16:32:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.841 16:32:09 -- common/autotest_common.sh@10 -- # set +x 00:20:35.841 16:32:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.841 16:32:09 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:20:35.841 16:32:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.841 16:32:09 -- common/autotest_common.sh@10 -- # set +x 00:20:35.841 [2024-04-17 16:32:09.808157] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:20:35.841 16:32:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.841 16:32:09 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:35.841 16:32:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.841 16:32:09 -- common/autotest_common.sh@10 -- # set +x 00:20:35.841 [2024-04-17 16:32:09.818209] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:35.841 16:32:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.841 16:32:09 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:20:35.841 16:32:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:35.841 16:32:09 -- common/autotest_common.sh@10 -- # set +x 00:20:35.841 16:32:09 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:20:35.841 16:32:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.841 16:32:09 -- common/autotest_common.sh@10 -- # set +x 00:20:36.099 Nvme0n1 00:20:36.099 16:32:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.099 16:32:09 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:20:36.099 16:32:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.099 16:32:09 -- common/autotest_common.sh@10 -- # set +x 00:20:36.099 16:32:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.099 16:32:09 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:36.099 16:32:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.099 16:32:09 -- common/autotest_common.sh@10 -- # set +x 00:20:36.099 16:32:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.099 16:32:09 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:36.099 16:32:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.099 16:32:09 -- common/autotest_common.sh@10 -- # set +x 00:20:36.099 [2024-04-17 16:32:09.963256] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:36.099 16:32:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 
]] 00:20:36.099 16:32:09 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:20:36.099 16:32:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.099 16:32:09 -- common/autotest_common.sh@10 -- # set +x 00:20:36.099 [2024-04-17 16:32:09.970982] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:20:36.099 [ 00:20:36.099 { 00:20:36.099 "allow_any_host": true, 00:20:36.099 "hosts": [], 00:20:36.099 "listen_addresses": [], 00:20:36.099 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:36.099 "subtype": "Discovery" 00:20:36.099 }, 00:20:36.099 { 00:20:36.099 "allow_any_host": true, 00:20:36.099 "hosts": [], 00:20:36.099 "listen_addresses": [ 00:20:36.099 { 00:20:36.099 "adrfam": "IPv4", 00:20:36.099 "traddr": "10.0.0.2", 00:20:36.099 "transport": "TCP", 00:20:36.099 "trsvcid": "4420", 00:20:36.099 "trtype": "TCP" 00:20:36.099 } 00:20:36.099 ], 00:20:36.099 "max_cntlid": 65519, 00:20:36.099 "max_namespaces": 1, 00:20:36.099 "min_cntlid": 1, 00:20:36.099 "model_number": "SPDK bdev Controller", 00:20:36.099 "namespaces": [ 00:20:36.099 { 00:20:36.099 "bdev_name": "Nvme0n1", 00:20:36.099 "name": "Nvme0n1", 00:20:36.099 "nguid": "D3FAA61383AA4E49A2B6A3E9BACA6F3D", 00:20:36.099 "nsid": 1, 00:20:36.099 "uuid": "d3faa613-83aa-4e49-a2b6-a3e9baca6f3d" 00:20:36.099 } 00:20:36.099 ], 00:20:36.099 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.099 "serial_number": "SPDK00000000000001", 00:20:36.099 "subtype": "NVMe" 00:20:36.099 } 00:20:36.099 ] 00:20:36.099 16:32:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.099 16:32:09 -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:36.099 16:32:09 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:20:36.099 16:32:09 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:20:36.356 16:32:10 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:20:36.356 16:32:10 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:20:36.356 16:32:10 -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:36.356 16:32:10 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:20:36.616 16:32:10 -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:20:36.616 16:32:10 -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:20:36.616 16:32:10 -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:20:36.616 16:32:10 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:36.616 16:32:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.616 16:32:10 -- common/autotest_common.sh@10 -- # set +x 00:20:36.616 16:32:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.616 16:32:10 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:20:36.616 16:32:10 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:20:36.616 16:32:10 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:36.616 16:32:10 -- nvmf/common.sh@117 -- # sync 00:20:36.616 16:32:10 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:36.616 16:32:10 -- nvmf/common.sh@120 -- # set +e 00:20:36.616 16:32:10 -- nvmf/common.sh@121 -- # for i in 
{1..20} 00:20:36.616 16:32:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:36.616 rmmod nvme_tcp 00:20:36.616 rmmod nvme_fabrics 00:20:36.616 rmmod nvme_keyring 00:20:36.616 16:32:10 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:36.616 16:32:10 -- nvmf/common.sh@124 -- # set -e 00:20:36.616 16:32:10 -- nvmf/common.sh@125 -- # return 0 00:20:36.616 16:32:10 -- nvmf/common.sh@478 -- # '[' -n 89820 ']' 00:20:36.616 16:32:10 -- nvmf/common.sh@479 -- # killprocess 89820 00:20:36.616 16:32:10 -- common/autotest_common.sh@936 -- # '[' -z 89820 ']' 00:20:36.616 16:32:10 -- common/autotest_common.sh@940 -- # kill -0 89820 00:20:36.616 16:32:10 -- common/autotest_common.sh@941 -- # uname 00:20:36.616 16:32:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:36.616 16:32:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89820 00:20:36.616 killing process with pid 89820 00:20:36.616 16:32:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:36.616 16:32:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:36.616 16:32:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89820' 00:20:36.616 16:32:10 -- common/autotest_common.sh@955 -- # kill 89820 00:20:36.616 [2024-04-17 16:32:10.546175] app.c: 930:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:20:36.616 16:32:10 -- common/autotest_common.sh@960 -- # wait 89820 00:20:36.876 16:32:10 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:36.876 16:32:10 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:36.876 16:32:10 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:36.876 16:32:10 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:36.876 16:32:10 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:36.876 16:32:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:36.876 16:32:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:36.876 16:32:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:36.876 16:32:10 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:36.876 ************************************ 00:20:36.876 END TEST nvmf_identify_passthru 00:20:36.876 ************************************ 00:20:36.876 00:20:36.876 real 0m3.222s 00:20:36.876 user 0m8.032s 00:20:36.876 sys 0m0.772s 00:20:36.876 16:32:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:36.876 16:32:10 -- common/autotest_common.sh@10 -- # set +x 00:20:36.876 16:32:10 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:20:36.876 16:32:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:36.876 16:32:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:36.877 16:32:10 -- common/autotest_common.sh@10 -- # set +x 00:20:37.137 ************************************ 00:20:37.137 START TEST nvmf_dif 00:20:37.137 ************************************ 00:20:37.137 16:32:10 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:20:37.137 * Looking for test storage... 
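The identify-passthru test that just ended passes because the serial number (12340) and model (QEMU) read locally over PCIe match what the same identify tool reports through the TCP subsystem exposed with --passthru-identify-ctrlr. The comparison condenses to two spdk_nvme_identify calls; a sketch using the transport strings from the run above (the variable plumbing is an assumption):

    # Sketch: compare local and NVMe-oF/TCP identify data, as the test does.
    IDENTIFY=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
    local_sn=$("$IDENTIFY" -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 | awk '/Serial Number:/ {print $3}')
    remote_sn=$("$IDENTIFY" -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' | awk '/Serial Number:/ {print $3}')
    [ "$local_sn" = "$remote_sn" ] || { echo "serial mismatch: $local_sn vs $remote_sn" >&2; exit 1; }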
00:20:37.137 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:37.137 16:32:11 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:37.137 16:32:11 -- nvmf/common.sh@7 -- # uname -s 00:20:37.137 16:32:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:37.137 16:32:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:37.137 16:32:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:37.137 16:32:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:37.137 16:32:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:37.137 16:32:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:37.137 16:32:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:37.137 16:32:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:37.137 16:32:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:37.137 16:32:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:37.137 16:32:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d 00:20:37.137 16:32:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=35bbb10f-fc38-42ac-b909-033700c5e05d 00:20:37.137 16:32:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:37.137 16:32:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:37.137 16:32:11 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:37.137 16:32:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:37.137 16:32:11 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:37.137 16:32:11 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:37.137 16:32:11 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:37.137 16:32:11 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:37.137 16:32:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.137 16:32:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.137 16:32:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.137 16:32:11 -- paths/export.sh@5 -- # export PATH 00:20:37.137 16:32:11 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.137 16:32:11 -- nvmf/common.sh@47 -- # : 0 00:20:37.137 16:32:11 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:37.137 16:32:11 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:37.137 16:32:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:37.137 16:32:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:37.137 16:32:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:37.137 16:32:11 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:37.137 16:32:11 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:37.137 16:32:11 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:37.137 16:32:11 -- target/dif.sh@15 -- # NULL_META=16 00:20:37.137 16:32:11 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:20:37.137 16:32:11 -- target/dif.sh@15 -- # NULL_SIZE=64 00:20:37.137 16:32:11 -- target/dif.sh@15 -- # NULL_DIF=1 00:20:37.137 16:32:11 -- target/dif.sh@135 -- # nvmftestinit 00:20:37.137 16:32:11 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:37.137 16:32:11 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:37.137 16:32:11 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:37.137 16:32:11 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:37.137 16:32:11 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:37.137 16:32:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:37.137 16:32:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:37.137 16:32:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:37.137 16:32:11 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:20:37.137 16:32:11 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:20:37.137 16:32:11 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:20:37.137 16:32:11 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:20:37.137 16:32:11 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:20:37.137 16:32:11 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:20:37.137 16:32:11 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:37.137 16:32:11 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:37.137 16:32:11 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:37.137 16:32:11 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:37.137 16:32:11 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:37.137 16:32:11 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:37.137 16:32:11 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:37.137 16:32:11 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:37.137 16:32:11 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:37.137 16:32:11 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:37.137 16:32:11 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:37.137 16:32:11 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:37.137 16:32:11 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:37.137 16:32:11 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:37.137 Cannot find device "nvmf_tgt_br" 
00:20:37.137 16:32:11 -- nvmf/common.sh@155 -- # true 00:20:37.137 16:32:11 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:37.137 Cannot find device "nvmf_tgt_br2" 00:20:37.137 16:32:11 -- nvmf/common.sh@156 -- # true 00:20:37.137 16:32:11 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:37.137 16:32:11 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:37.137 Cannot find device "nvmf_tgt_br" 00:20:37.137 16:32:11 -- nvmf/common.sh@158 -- # true 00:20:37.137 16:32:11 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:37.137 Cannot find device "nvmf_tgt_br2" 00:20:37.137 16:32:11 -- nvmf/common.sh@159 -- # true 00:20:37.137 16:32:11 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:37.137 16:32:11 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:37.137 16:32:11 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:37.137 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:37.137 16:32:11 -- nvmf/common.sh@162 -- # true 00:20:37.137 16:32:11 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:37.397 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:37.397 16:32:11 -- nvmf/common.sh@163 -- # true 00:20:37.397 16:32:11 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:37.397 16:32:11 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:37.397 16:32:11 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:37.397 16:32:11 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:37.397 16:32:11 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:37.397 16:32:11 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:37.397 16:32:11 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:37.397 16:32:11 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:37.397 16:32:11 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:37.397 16:32:11 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:37.397 16:32:11 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:37.397 16:32:11 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:37.397 16:32:11 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:37.397 16:32:11 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:37.397 16:32:11 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:37.397 16:32:11 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:37.397 16:32:11 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:37.397 16:32:11 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:37.397 16:32:11 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:37.397 16:32:11 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:37.397 16:32:11 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:37.397 16:32:11 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:37.397 16:32:11 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:37.397 16:32:11 -- 
nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:37.397 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:37.397 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:20:37.397 00:20:37.397 --- 10.0.0.2 ping statistics --- 00:20:37.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.397 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:20:37.397 16:32:11 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:37.397 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:37.397 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:20:37.397 00:20:37.397 --- 10.0.0.3 ping statistics --- 00:20:37.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.397 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:20:37.397 16:32:11 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:37.397 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:37.397 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:20:37.397 00:20:37.397 --- 10.0.0.1 ping statistics --- 00:20:37.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.397 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:20:37.397 16:32:11 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:37.397 16:32:11 -- nvmf/common.sh@422 -- # return 0 00:20:37.397 16:32:11 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:20:37.397 16:32:11 -- nvmf/common.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:37.655 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:37.655 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:37.655 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:37.655 16:32:11 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:37.655 16:32:11 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:37.655 16:32:11 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:37.655 16:32:11 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:37.655 16:32:11 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:37.655 16:32:11 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:37.914 16:32:11 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:20:37.914 16:32:11 -- target/dif.sh@137 -- # nvmfappstart 00:20:37.914 16:32:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:37.914 16:32:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:37.914 16:32:11 -- common/autotest_common.sh@10 -- # set +x 00:20:37.914 16:32:11 -- nvmf/common.sh@470 -- # nvmfpid=90177 00:20:37.914 16:32:11 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:37.914 16:32:11 -- nvmf/common.sh@471 -- # waitforlisten 90177 00:20:37.914 16:32:11 -- common/autotest_common.sh@817 -- # '[' -z 90177 ']' 00:20:37.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:37.914 16:32:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:37.914 16:32:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:37.914 16:32:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:37.914 16:32:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:37.914 16:32:11 -- common/autotest_common.sh@10 -- # set +x 00:20:37.914 [2024-04-17 16:32:11.764396] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:20:37.914 [2024-04-17 16:32:11.764489] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:37.914 [2024-04-17 16:32:11.900446] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.173 [2024-04-17 16:32:12.028155] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:38.173 [2024-04-17 16:32:12.028227] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:38.173 [2024-04-17 16:32:12.028242] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:38.173 [2024-04-17 16:32:12.028252] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:38.173 [2024-04-17 16:32:12.028261] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:38.173 [2024-04-17 16:32:12.028301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.740 16:32:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:38.740 16:32:12 -- common/autotest_common.sh@850 -- # return 0 00:20:38.740 16:32:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:38.740 16:32:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:38.740 16:32:12 -- common/autotest_common.sh@10 -- # set +x 00:20:38.740 16:32:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:38.740 16:32:12 -- target/dif.sh@139 -- # create_transport 00:20:38.740 16:32:12 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:20:38.740 16:32:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:38.740 16:32:12 -- common/autotest_common.sh@10 -- # set +x 00:20:38.740 [2024-04-17 16:32:12.760108] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:38.740 16:32:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:38.740 16:32:12 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:20:38.740 16:32:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:38.740 16:32:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:38.740 16:32:12 -- common/autotest_common.sh@10 -- # set +x 00:20:38.999 ************************************ 00:20:38.999 START TEST fio_dif_1_default 00:20:38.999 ************************************ 00:20:38.999 16:32:12 -- common/autotest_common.sh@1111 -- # fio_dif_1 00:20:38.999 16:32:12 -- target/dif.sh@86 -- # create_subsystems 0 00:20:38.999 16:32:12 -- target/dif.sh@28 -- # local sub 00:20:38.999 16:32:12 -- target/dif.sh@30 -- # for sub in "$@" 00:20:38.999 16:32:12 -- target/dif.sh@31 -- # create_subsystem 0 00:20:38.999 16:32:12 -- target/dif.sh@18 -- # local sub_id=0 00:20:38.999 16:32:12 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:38.999 16:32:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:38.999 16:32:12 -- common/autotest_common.sh@10 -- # set +x 00:20:38.999 bdev_null0 00:20:38.999 16:32:12 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:38.999 16:32:12 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:38.999 16:32:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:38.999 16:32:12 -- common/autotest_common.sh@10 -- # set +x 00:20:38.999 16:32:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:38.999 16:32:12 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:38.999 16:32:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:38.999 16:32:12 -- common/autotest_common.sh@10 -- # set +x 00:20:38.999 16:32:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:38.999 16:32:12 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:38.999 16:32:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:38.999 16:32:12 -- common/autotest_common.sh@10 -- # set +x 00:20:38.999 [2024-04-17 16:32:12.876271] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:38.999 16:32:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:38.999 16:32:12 -- target/dif.sh@87 -- # fio /dev/fd/62 00:20:38.999 16:32:12 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:20:38.999 16:32:12 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:38.999 16:32:12 -- nvmf/common.sh@521 -- # config=() 00:20:38.999 16:32:12 -- nvmf/common.sh@521 -- # local subsystem config 00:20:38.999 16:32:12 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:38.999 16:32:12 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:38.999 16:32:12 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:38.999 { 00:20:38.999 "params": { 00:20:38.999 "name": "Nvme$subsystem", 00:20:38.999 "trtype": "$TEST_TRANSPORT", 00:20:38.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:38.999 "adrfam": "ipv4", 00:20:38.999 "trsvcid": "$NVMF_PORT", 00:20:38.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:38.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:38.999 "hdgst": ${hdgst:-false}, 00:20:38.999 "ddgst": ${ddgst:-false} 00:20:38.999 }, 00:20:38.999 "method": "bdev_nvme_attach_controller" 00:20:38.999 } 00:20:38.999 EOF 00:20:38.999 )") 00:20:38.999 16:32:12 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:38.999 16:32:12 -- target/dif.sh@82 -- # gen_fio_conf 00:20:38.999 16:32:12 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:20:38.999 16:32:12 -- target/dif.sh@54 -- # local file 00:20:38.999 16:32:12 -- target/dif.sh@56 -- # cat 00:20:38.999 16:32:12 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:38.999 16:32:12 -- nvmf/common.sh@543 -- # cat 00:20:38.999 16:32:12 -- common/autotest_common.sh@1325 -- # local sanitizers 00:20:38.999 16:32:12 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:38.999 16:32:12 -- common/autotest_common.sh@1327 -- # shift 00:20:38.999 16:32:12 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:20:38.999 16:32:12 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:20:38.999 16:32:12 -- target/dif.sh@72 -- # (( file = 1 )) 00:20:38.999 16:32:12 -- common/autotest_common.sh@1331 -- # ldd 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:38.999 16:32:12 -- nvmf/common.sh@545 -- # jq . 00:20:38.999 16:32:12 -- target/dif.sh@72 -- # (( file <= files )) 00:20:38.999 16:32:12 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:20:38.999 16:32:12 -- common/autotest_common.sh@1331 -- # grep libasan 00:20:38.999 16:32:12 -- nvmf/common.sh@546 -- # IFS=, 00:20:38.999 16:32:12 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:38.999 "params": { 00:20:38.999 "name": "Nvme0", 00:20:38.999 "trtype": "tcp", 00:20:38.999 "traddr": "10.0.0.2", 00:20:38.999 "adrfam": "ipv4", 00:20:38.999 "trsvcid": "4420", 00:20:38.999 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:38.999 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:38.999 "hdgst": false, 00:20:38.999 "ddgst": false 00:20:38.999 }, 00:20:38.999 "method": "bdev_nvme_attach_controller" 00:20:38.999 }' 00:20:38.999 16:32:12 -- common/autotest_common.sh@1331 -- # asan_lib= 00:20:38.999 16:32:12 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:20:38.999 16:32:12 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:20:38.999 16:32:12 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:38.999 16:32:12 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:20:38.999 16:32:12 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:20:38.999 16:32:12 -- common/autotest_common.sh@1331 -- # asan_lib= 00:20:38.999 16:32:12 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:20:38.999 16:32:12 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:38.999 16:32:12 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:39.258 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:39.258 fio-3.35 00:20:39.258 Starting 1 thread 00:20:39.517 [2024-04-17 16:32:13.550941] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
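The *ERROR* line above and its companion on the next line are expected in these fio runs: the spdk_bdev fio plugin starts its own in-process SPDK application, which tries to claim the default RPC socket that nvmf_tgt already owns, so the fio process simply runs without an RPC server and the job proceeds. A sketch of the invocation the harness assembles, with paths as in this run — the JSON on fd 62 carries the bdev_nvme_attach_controller parameters printed above, and fd 61 carries the generated fio job file:

    LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61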
00:20:39.517 [2024-04-17 16:32:13.551015] rpc.c: 167:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:51.875 00:20:51.875 filename0: (groupid=0, jobs=1): err= 0: pid=90270: Wed Apr 17 16:32:23 2024 00:20:51.875 read: IOPS=903, BW=3613KiB/s (3700kB/s)(35.3MiB/10012msec) 00:20:51.875 slat (usec): min=6, max=148, avg= 9.24, stdev= 5.23 00:20:51.875 clat (usec): min=429, max=41681, avg=4399.20, stdev=11952.03 00:20:51.875 lat (usec): min=436, max=41694, avg=4408.44, stdev=11952.17 00:20:51.875 clat percentiles (usec): 00:20:51.875 | 1.00th=[ 457], 5.00th=[ 465], 10.00th=[ 474], 20.00th=[ 482], 00:20:51.875 | 30.00th=[ 486], 40.00th=[ 494], 50.00th=[ 498], 60.00th=[ 506], 00:20:51.875 | 70.00th=[ 519], 80.00th=[ 545], 90.00th=[ 676], 95.00th=[41157], 00:20:51.875 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:20:51.875 | 99.99th=[41681] 00:20:51.875 bw ( KiB/s): min= 2112, max= 7104, per=100.00%, avg=3616.00, stdev=1304.44, samples=20 00:20:51.875 iops : min= 528, max= 1776, avg=904.00, stdev=326.11, samples=20 00:20:51.875 lat (usec) : 500=50.90%, 750=39.19%, 1000=0.28% 00:20:51.875 lat (msec) : 2=0.04%, 50=9.60% 00:20:51.875 cpu : usr=90.66%, sys=8.20%, ctx=124, majf=0, minf=0 00:20:51.875 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:51.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:51.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:51.875 issued rwts: total=9044,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:51.875 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:51.875 00:20:51.875 Run status group 0 (all jobs): 00:20:51.875 READ: bw=3613KiB/s (3700kB/s), 3613KiB/s-3613KiB/s (3700kB/s-3700kB/s), io=35.3MiB (37.0MB), run=10012-10012msec 00:20:51.875 16:32:23 -- target/dif.sh@88 -- # destroy_subsystems 0 00:20:51.875 16:32:23 -- target/dif.sh@43 -- # local sub 00:20:51.875 16:32:23 -- target/dif.sh@45 -- # for sub in "$@" 00:20:51.875 16:32:23 -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:51.875 16:32:23 -- target/dif.sh@36 -- # local sub_id=0 00:20:51.875 16:32:23 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:51.876 16:32:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:51.876 16:32:23 -- common/autotest_common.sh@10 -- # set +x 00:20:51.876 16:32:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:51.876 16:32:23 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:51.876 16:32:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:51.876 16:32:23 -- common/autotest_common.sh@10 -- # set +x 00:20:51.876 ************************************ 00:20:51.876 END TEST fio_dif_1_default 00:20:51.876 ************************************ 00:20:51.876 16:32:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:51.876 00:20:51.876 real 0m11.071s 00:20:51.876 user 0m9.783s 00:20:51.876 sys 0m1.075s 00:20:51.876 16:32:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:51.876 16:32:23 -- common/autotest_common.sh@10 -- # set +x 00:20:51.876 16:32:23 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:20:51.876 16:32:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:51.876 16:32:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:51.876 16:32:23 -- common/autotest_common.sh@10 -- # set +x 00:20:51.876 ************************************ 00:20:51.876 START TEST 
fio_dif_1_multi_subsystems 00:20:51.876 ************************************ 00:20:51.876 16:32:24 -- common/autotest_common.sh@1111 -- # fio_dif_1_multi_subsystems 00:20:51.876 16:32:24 -- target/dif.sh@92 -- # local files=1 00:20:51.876 16:32:24 -- target/dif.sh@94 -- # create_subsystems 0 1 00:20:51.876 16:32:24 -- target/dif.sh@28 -- # local sub 00:20:51.876 16:32:24 -- target/dif.sh@30 -- # for sub in "$@" 00:20:51.876 16:32:24 -- target/dif.sh@31 -- # create_subsystem 0 00:20:51.876 16:32:24 -- target/dif.sh@18 -- # local sub_id=0 00:20:51.876 16:32:24 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:51.876 16:32:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:51.876 16:32:24 -- common/autotest_common.sh@10 -- # set +x 00:20:51.876 bdev_null0 00:20:51.876 16:32:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:51.876 16:32:24 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:51.876 16:32:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:51.876 16:32:24 -- common/autotest_common.sh@10 -- # set +x 00:20:51.876 16:32:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:51.876 16:32:24 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:51.876 16:32:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:51.876 16:32:24 -- common/autotest_common.sh@10 -- # set +x 00:20:51.876 16:32:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:51.876 16:32:24 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:51.876 16:32:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:51.876 16:32:24 -- common/autotest_common.sh@10 -- # set +x 00:20:51.876 [2024-04-17 16:32:24.054039] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:51.876 16:32:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:51.876 16:32:24 -- target/dif.sh@30 -- # for sub in "$@" 00:20:51.876 16:32:24 -- target/dif.sh@31 -- # create_subsystem 1 00:20:51.876 16:32:24 -- target/dif.sh@18 -- # local sub_id=1 00:20:51.876 16:32:24 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:51.876 16:32:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:51.876 16:32:24 -- common/autotest_common.sh@10 -- # set +x 00:20:51.876 bdev_null1 00:20:51.876 16:32:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:51.876 16:32:24 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:51.876 16:32:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:51.876 16:32:24 -- common/autotest_common.sh@10 -- # set +x 00:20:51.876 16:32:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:51.876 16:32:24 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:51.876 16:32:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:51.876 16:32:24 -- common/autotest_common.sh@10 -- # set +x 00:20:51.876 16:32:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:51.876 16:32:24 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:51.876 16:32:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:51.876 16:32:24 -- 
common/autotest_common.sh@10 -- # set +x 00:20:51.876 16:32:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:51.876 16:32:24 -- target/dif.sh@95 -- # fio /dev/fd/62 00:20:51.876 16:32:24 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:20:51.876 16:32:24 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:51.876 16:32:24 -- nvmf/common.sh@521 -- # config=() 00:20:51.876 16:32:24 -- nvmf/common.sh@521 -- # local subsystem config 00:20:51.876 16:32:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:51.876 16:32:24 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:51.876 16:32:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:51.876 { 00:20:51.876 "params": { 00:20:51.876 "name": "Nvme$subsystem", 00:20:51.876 "trtype": "$TEST_TRANSPORT", 00:20:51.876 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.876 "adrfam": "ipv4", 00:20:51.876 "trsvcid": "$NVMF_PORT", 00:20:51.876 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.876 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.876 "hdgst": ${hdgst:-false}, 00:20:51.876 "ddgst": ${ddgst:-false} 00:20:51.876 }, 00:20:51.876 "method": "bdev_nvme_attach_controller" 00:20:51.876 } 00:20:51.876 EOF 00:20:51.876 )") 00:20:51.876 16:32:24 -- target/dif.sh@82 -- # gen_fio_conf 00:20:51.876 16:32:24 -- target/dif.sh@54 -- # local file 00:20:51.876 16:32:24 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:51.876 16:32:24 -- target/dif.sh@56 -- # cat 00:20:51.876 16:32:24 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:20:51.876 16:32:24 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:51.876 16:32:24 -- common/autotest_common.sh@1325 -- # local sanitizers 00:20:51.876 16:32:24 -- nvmf/common.sh@543 -- # cat 00:20:51.876 16:32:24 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:51.876 16:32:24 -- common/autotest_common.sh@1327 -- # shift 00:20:51.876 16:32:24 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:20:51.876 16:32:24 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:20:51.876 16:32:24 -- target/dif.sh@72 -- # (( file = 1 )) 00:20:51.876 16:32:24 -- target/dif.sh@72 -- # (( file <= files )) 00:20:51.876 16:32:24 -- target/dif.sh@73 -- # cat 00:20:51.876 16:32:24 -- common/autotest_common.sh@1331 -- # grep libasan 00:20:51.876 16:32:24 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:51.876 16:32:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:51.876 16:32:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:51.876 { 00:20:51.876 "params": { 00:20:51.876 "name": "Nvme$subsystem", 00:20:51.876 "trtype": "$TEST_TRANSPORT", 00:20:51.876 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.876 "adrfam": "ipv4", 00:20:51.876 "trsvcid": "$NVMF_PORT", 00:20:51.876 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.876 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.876 "hdgst": ${hdgst:-false}, 00:20:51.876 "ddgst": ${ddgst:-false} 00:20:51.876 }, 00:20:51.876 "method": "bdev_nvme_attach_controller" 00:20:51.876 } 00:20:51.876 EOF 00:20:51.876 )") 00:20:51.876 16:32:24 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:20:51.876 16:32:24 -- nvmf/common.sh@543 -- # cat 00:20:51.876 16:32:24 -- target/dif.sh@72 
-- # (( file++ )) 00:20:51.876 16:32:24 -- target/dif.sh@72 -- # (( file <= files )) 00:20:51.876 16:32:24 -- nvmf/common.sh@545 -- # jq . 00:20:51.876 16:32:24 -- nvmf/common.sh@546 -- # IFS=, 00:20:51.876 16:32:24 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:51.876 "params": { 00:20:51.876 "name": "Nvme0", 00:20:51.876 "trtype": "tcp", 00:20:51.876 "traddr": "10.0.0.2", 00:20:51.876 "adrfam": "ipv4", 00:20:51.876 "trsvcid": "4420", 00:20:51.876 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:51.876 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:51.876 "hdgst": false, 00:20:51.876 "ddgst": false 00:20:51.876 }, 00:20:51.876 "method": "bdev_nvme_attach_controller" 00:20:51.876 },{ 00:20:51.876 "params": { 00:20:51.876 "name": "Nvme1", 00:20:51.876 "trtype": "tcp", 00:20:51.876 "traddr": "10.0.0.2", 00:20:51.876 "adrfam": "ipv4", 00:20:51.876 "trsvcid": "4420", 00:20:51.876 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:51.876 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:51.876 "hdgst": false, 00:20:51.876 "ddgst": false 00:20:51.876 }, 00:20:51.876 "method": "bdev_nvme_attach_controller" 00:20:51.876 }' 00:20:51.876 16:32:24 -- common/autotest_common.sh@1331 -- # asan_lib= 00:20:51.876 16:32:24 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:20:51.876 16:32:24 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:20:51.876 16:32:24 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:51.876 16:32:24 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:20:51.876 16:32:24 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:20:51.876 16:32:24 -- common/autotest_common.sh@1331 -- # asan_lib= 00:20:51.876 16:32:24 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:20:51.876 16:32:24 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:51.876 16:32:24 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:51.876 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:51.876 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:51.876 fio-3.35 00:20:51.876 Starting 2 threads 00:20:51.876 [2024-04-17 16:32:24.875089] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:20:51.876 [2024-04-17 16:32:24.875403] rpc.c: 167:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:21:01.856 00:21:01.856 filename0: (groupid=0, jobs=1): err= 0: pid=90434: Wed Apr 17 16:32:35 2024 00:21:01.856 read: IOPS=259, BW=1036KiB/s (1061kB/s)(10.2MiB/10034msec) 00:21:01.856 slat (nsec): min=5538, max=65842, avg=10120.33, stdev=5619.30 00:21:01.856 clat (usec): min=454, max=42032, avg=15403.51, stdev=19546.56 00:21:01.856 lat (usec): min=461, max=42046, avg=15413.63, stdev=19546.51 00:21:01.856 clat percentiles (usec): 00:21:01.856 | 1.00th=[ 469], 5.00th=[ 478], 10.00th=[ 486], 20.00th=[ 498], 00:21:01.856 | 30.00th=[ 506], 40.00th=[ 523], 50.00th=[ 553], 60.00th=[ 865], 00:21:01.856 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:21:01.856 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:21:01.856 | 99.99th=[42206] 00:21:01.856 bw ( KiB/s): min= 640, max= 1888, per=50.82%, avg=1038.30, stdev=291.60, samples=20 00:21:01.856 iops : min= 160, max= 472, avg=259.55, stdev=72.92, samples=20 00:21:01.856 lat (usec) : 500=23.42%, 750=32.88%, 1000=7.08% 00:21:01.856 lat (msec) : 50=36.62% 00:21:01.856 cpu : usr=95.04%, sys=4.47%, ctx=5, majf=0, minf=0 00:21:01.856 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:01.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.856 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.856 issued rwts: total=2600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:01.856 latency : target=0, window=0, percentile=100.00%, depth=4 00:21:01.856 filename1: (groupid=0, jobs=1): err= 0: pid=90435: Wed Apr 17 16:32:35 2024 00:21:01.856 read: IOPS=251, BW=1007KiB/s (1031kB/s)(9.86MiB/10030msec) 00:21:01.856 slat (nsec): min=6806, max=65651, avg=10815.30, stdev=7086.46 00:21:01.856 clat (usec): min=450, max=41963, avg=15859.91, stdev=19666.53 00:21:01.856 lat (usec): min=458, max=41995, avg=15870.72, stdev=19666.25 00:21:01.856 clat percentiles (usec): 00:21:01.856 | 1.00th=[ 469], 5.00th=[ 482], 10.00th=[ 490], 20.00th=[ 502], 00:21:01.856 | 30.00th=[ 515], 40.00th=[ 537], 50.00th=[ 562], 60.00th=[ 889], 00:21:01.856 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:21:01.856 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:21:01.856 | 99.99th=[42206] 00:21:01.856 bw ( KiB/s): min= 480, max= 1440, per=49.35%, avg=1008.00, stdev=256.11, samples=20 00:21:01.856 iops : min= 120, max= 360, avg=252.00, stdev=64.03, samples=20 00:21:01.856 lat (usec) : 500=18.30%, 750=36.85%, 1000=6.97% 00:21:01.856 lat (msec) : 2=0.16%, 50=37.72% 00:21:01.856 cpu : usr=95.07%, sys=4.11%, ctx=85, majf=0, minf=0 00:21:01.856 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:01.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.856 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.856 issued rwts: total=2524,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:01.856 latency : target=0, window=0, percentile=100.00%, depth=4 00:21:01.856 00:21:01.856 Run status group 0 (all jobs): 00:21:01.856 READ: bw=2043KiB/s (2092kB/s), 1007KiB/s-1036KiB/s (1031kB/s-1061kB/s), io=20.0MiB (21.0MB), run=10030-10034msec 00:21:01.856 16:32:35 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:21:01.856 16:32:35 -- target/dif.sh@43 -- # local sub 00:21:01.856 16:32:35 -- target/dif.sh@45 -- # for sub in 
"$@" 00:21:01.856 16:32:35 -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:01.856 16:32:35 -- target/dif.sh@36 -- # local sub_id=0 00:21:01.856 16:32:35 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:01.856 16:32:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:01.856 16:32:35 -- common/autotest_common.sh@10 -- # set +x 00:21:01.856 16:32:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:01.856 16:32:35 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:01.856 16:32:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:01.856 16:32:35 -- common/autotest_common.sh@10 -- # set +x 00:21:01.856 16:32:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:01.856 16:32:35 -- target/dif.sh@45 -- # for sub in "$@" 00:21:01.856 16:32:35 -- target/dif.sh@46 -- # destroy_subsystem 1 00:21:01.856 16:32:35 -- target/dif.sh@36 -- # local sub_id=1 00:21:01.856 16:32:35 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:01.856 16:32:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:01.856 16:32:35 -- common/autotest_common.sh@10 -- # set +x 00:21:01.856 16:32:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:01.856 16:32:35 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:21:01.856 16:32:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:01.856 16:32:35 -- common/autotest_common.sh@10 -- # set +x 00:21:01.856 ************************************ 00:21:01.856 END TEST fio_dif_1_multi_subsystems 00:21:01.856 ************************************ 00:21:01.856 16:32:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:01.856 00:21:01.856 real 0m11.293s 00:21:01.856 user 0m19.939s 00:21:01.856 sys 0m1.159s 00:21:01.856 16:32:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:01.856 16:32:35 -- common/autotest_common.sh@10 -- # set +x 00:21:01.856 16:32:35 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:21:01.856 16:32:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:01.856 16:32:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:01.856 16:32:35 -- common/autotest_common.sh@10 -- # set +x 00:21:01.856 ************************************ 00:21:01.856 START TEST fio_dif_rand_params 00:21:01.856 ************************************ 00:21:01.856 16:32:35 -- common/autotest_common.sh@1111 -- # fio_dif_rand_params 00:21:01.856 16:32:35 -- target/dif.sh@100 -- # local NULL_DIF 00:21:01.856 16:32:35 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:21:01.856 16:32:35 -- target/dif.sh@103 -- # NULL_DIF=3 00:21:01.856 16:32:35 -- target/dif.sh@103 -- # bs=128k 00:21:01.856 16:32:35 -- target/dif.sh@103 -- # numjobs=3 00:21:01.856 16:32:35 -- target/dif.sh@103 -- # iodepth=3 00:21:01.856 16:32:35 -- target/dif.sh@103 -- # runtime=5 00:21:01.856 16:32:35 -- target/dif.sh@105 -- # create_subsystems 0 00:21:01.856 16:32:35 -- target/dif.sh@28 -- # local sub 00:21:01.856 16:32:35 -- target/dif.sh@30 -- # for sub in "$@" 00:21:01.856 16:32:35 -- target/dif.sh@31 -- # create_subsystem 0 00:21:01.856 16:32:35 -- target/dif.sh@18 -- # local sub_id=0 00:21:01.856 16:32:35 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:21:01.857 16:32:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:01.857 16:32:35 -- common/autotest_common.sh@10 -- # set +x 00:21:01.857 bdev_null0 00:21:01.857 16:32:35 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:01.857 16:32:35 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:01.857 16:32:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:01.857 16:32:35 -- common/autotest_common.sh@10 -- # set +x 00:21:01.857 16:32:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:01.857 16:32:35 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:01.857 16:32:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:01.857 16:32:35 -- common/autotest_common.sh@10 -- # set +x 00:21:01.857 16:32:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:01.857 16:32:35 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:01.857 16:32:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:01.857 16:32:35 -- common/autotest_common.sh@10 -- # set +x 00:21:01.857 [2024-04-17 16:32:35.463623] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:01.857 16:32:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:01.857 16:32:35 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:21:01.857 16:32:35 -- target/dif.sh@106 -- # fio /dev/fd/62 00:21:01.857 16:32:35 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:21:01.857 16:32:35 -- nvmf/common.sh@521 -- # config=() 00:21:01.857 16:32:35 -- nvmf/common.sh@521 -- # local subsystem config 00:21:01.857 16:32:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:01.857 16:32:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:01.857 { 00:21:01.857 "params": { 00:21:01.857 "name": "Nvme$subsystem", 00:21:01.857 "trtype": "$TEST_TRANSPORT", 00:21:01.857 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.857 "adrfam": "ipv4", 00:21:01.857 "trsvcid": "$NVMF_PORT", 00:21:01.857 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.857 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.857 "hdgst": ${hdgst:-false}, 00:21:01.857 "ddgst": ${ddgst:-false} 00:21:01.857 }, 00:21:01.857 "method": "bdev_nvme_attach_controller" 00:21:01.857 } 00:21:01.857 EOF 00:21:01.857 )") 00:21:01.857 16:32:35 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:01.857 16:32:35 -- target/dif.sh@82 -- # gen_fio_conf 00:21:01.857 16:32:35 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:01.857 16:32:35 -- target/dif.sh@54 -- # local file 00:21:01.857 16:32:35 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:21:01.857 16:32:35 -- target/dif.sh@56 -- # cat 00:21:01.857 16:32:35 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:01.857 16:32:35 -- common/autotest_common.sh@1325 -- # local sanitizers 00:21:01.857 16:32:35 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:01.857 16:32:35 -- nvmf/common.sh@543 -- # cat 00:21:01.857 16:32:35 -- common/autotest_common.sh@1327 -- # shift 00:21:01.857 16:32:35 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:21:01.857 16:32:35 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:21:01.857 16:32:35 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:01.857 16:32:35 
-- target/dif.sh@72 -- # (( file = 1 )) 00:21:01.857 16:32:35 -- target/dif.sh@72 -- # (( file <= files )) 00:21:01.857 16:32:35 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:21:01.857 16:32:35 -- nvmf/common.sh@545 -- # jq . 00:21:01.857 16:32:35 -- common/autotest_common.sh@1331 -- # grep libasan 00:21:01.857 16:32:35 -- nvmf/common.sh@546 -- # IFS=, 00:21:01.857 16:32:35 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:21:01.857 "params": { 00:21:01.857 "name": "Nvme0", 00:21:01.857 "trtype": "tcp", 00:21:01.857 "traddr": "10.0.0.2", 00:21:01.857 "adrfam": "ipv4", 00:21:01.857 "trsvcid": "4420", 00:21:01.857 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:01.857 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:01.857 "hdgst": false, 00:21:01.857 "ddgst": false 00:21:01.857 }, 00:21:01.857 "method": "bdev_nvme_attach_controller" 00:21:01.857 }' 00:21:01.857 16:32:35 -- common/autotest_common.sh@1331 -- # asan_lib= 00:21:01.857 16:32:35 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:21:01.857 16:32:35 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:21:01.857 16:32:35 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:01.857 16:32:35 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:21:01.857 16:32:35 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:21:01.857 16:32:35 -- common/autotest_common.sh@1331 -- # asan_lib= 00:21:01.857 16:32:35 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:21:01.857 16:32:35 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:01.857 16:32:35 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:01.857 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:21:01.857 ... 00:21:01.857 fio-3.35 00:21:01.857 Starting 3 threads 00:21:02.116 [2024-04-17 16:32:36.116619] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:21:02.116 [2024-04-17 16:32:36.116711] rpc.c: 167:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:21:07.383 00:21:07.383 filename0: (groupid=0, jobs=1): err= 0: pid=90595: Wed Apr 17 16:32:41 2024 00:21:07.383 read: IOPS=275, BW=34.5MiB/s (36.1MB/s)(172MiB/5002msec) 00:21:07.383 slat (usec): min=6, max=243, avg=14.64, stdev= 8.59 00:21:07.383 clat (usec): min=5845, max=52858, avg=10859.40, stdev=2226.14 00:21:07.383 lat (usec): min=5856, max=52896, avg=10874.04, stdev=2226.78 00:21:07.383 clat percentiles (usec): 00:21:07.383 | 1.00th=[ 6718], 5.00th=[ 8717], 10.00th=[ 9503], 20.00th=[10028], 00:21:07.383 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10945], 60.00th=[11207], 00:21:07.383 | 70.00th=[11469], 80.00th=[11600], 90.00th=[11863], 95.00th=[12125], 00:21:07.383 | 99.00th=[12649], 99.50th=[12911], 99.90th=[52691], 99.95th=[52691], 00:21:07.383 | 99.99th=[52691] 00:21:07.383 bw ( KiB/s): min=32000, max=37632, per=38.55%, avg=35377.22, stdev=1602.19, samples=9 00:21:07.384 iops : min= 250, max= 294, avg=276.33, stdev=12.55, samples=9 00:21:07.384 lat (msec) : 10=17.91%, 20=81.87%, 100=0.22% 00:21:07.384 cpu : usr=91.72%, sys=6.66%, ctx=7, majf=0, minf=0 00:21:07.384 IO depths : 1=1.2%, 2=98.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:07.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.384 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.384 issued rwts: total=1379,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:07.384 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:07.384 filename0: (groupid=0, jobs=1): err= 0: pid=90596: Wed Apr 17 16:32:41 2024 00:21:07.384 read: IOPS=238, BW=29.8MiB/s (31.2MB/s)(149MiB/5002msec) 00:21:07.384 slat (usec): min=7, max=160, avg=13.51, stdev= 7.33 00:21:07.384 clat (usec): min=6465, max=56894, avg=12561.36, stdev=4292.81 00:21:07.384 lat (usec): min=6475, max=56906, avg=12574.86, stdev=4292.59 00:21:07.384 clat percentiles (usec): 00:21:07.384 | 1.00th=[ 7963], 5.00th=[10683], 10.00th=[10945], 20.00th=[11600], 00:21:07.384 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12256], 60.00th=[12387], 00:21:07.384 | 70.00th=[12518], 80.00th=[12780], 90.00th=[13304], 95.00th=[13698], 00:21:07.384 | 99.00th=[51643], 99.50th=[53216], 99.90th=[56886], 99.95th=[56886], 00:21:07.384 | 99.99th=[56886] 00:21:07.384 bw ( KiB/s): min=25856, max=33024, per=33.13%, avg=30407.11, stdev=2193.91, samples=9 00:21:07.384 iops : min= 202, max= 258, avg=237.56, stdev=17.14, samples=9 00:21:07.384 lat (msec) : 10=1.76%, 20=97.23%, 100=1.01% 00:21:07.384 cpu : usr=92.12%, sys=6.16%, ctx=39, majf=0, minf=0 00:21:07.384 IO depths : 1=10.7%, 2=89.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:07.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.384 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.384 issued rwts: total=1192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:07.384 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:07.384 filename0: (groupid=0, jobs=1): err= 0: pid=90597: Wed Apr 17 16:32:41 2024 00:21:07.384 read: IOPS=203, BW=25.4MiB/s (26.6MB/s)(127MiB/5004msec) 00:21:07.384 slat (nsec): min=7651, max=57497, avg=11961.13, stdev=6198.58 00:21:07.384 clat (usec): min=4488, max=25815, avg=14728.66, stdev=1680.95 00:21:07.384 lat (usec): min=4512, max=25832, avg=14740.62, stdev=1681.10 00:21:07.384 clat percentiles (usec): 00:21:07.384 | 1.00th=[ 9241], 5.00th=[10290], 
10.00th=[13698], 20.00th=[14222], 00:21:07.384 | 30.00th=[14484], 40.00th=[14746], 50.00th=[15008], 60.00th=[15139], 00:21:07.384 | 70.00th=[15401], 80.00th=[15664], 90.00th=[16188], 95.00th=[16450], 00:21:07.384 | 99.00th=[16909], 99.50th=[16909], 99.90th=[25822], 99.95th=[25822], 00:21:07.384 | 99.99th=[25822] 00:21:07.384 bw ( KiB/s): min=25344, max=27648, per=28.36%, avg=26026.67, stdev=809.54, samples=9 00:21:07.384 iops : min= 198, max= 216, avg=203.33, stdev= 6.32, samples=9 00:21:07.384 lat (msec) : 10=3.83%, 20=95.87%, 50=0.29% 00:21:07.384 cpu : usr=92.64%, sys=5.74%, ctx=36, majf=0, minf=9 00:21:07.384 IO depths : 1=32.7%, 2=67.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:07.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.384 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.384 issued rwts: total=1017,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:07.384 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:07.384 00:21:07.384 Run status group 0 (all jobs): 00:21:07.384 READ: bw=89.6MiB/s (94.0MB/s), 25.4MiB/s-34.5MiB/s (26.6MB/s-36.1MB/s), io=449MiB (470MB), run=5002-5004msec 00:21:07.644 16:32:41 -- target/dif.sh@107 -- # destroy_subsystems 0 00:21:07.644 16:32:41 -- target/dif.sh@43 -- # local sub 00:21:07.644 16:32:41 -- target/dif.sh@45 -- # for sub in "$@" 00:21:07.644 16:32:41 -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:07.644 16:32:41 -- target/dif.sh@36 -- # local sub_id=0 00:21:07.644 16:32:41 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:07.644 16:32:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:07.644 16:32:41 -- common/autotest_common.sh@10 -- # set +x 00:21:07.644 16:32:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:07.644 16:32:41 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:07.644 16:32:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:07.644 16:32:41 -- common/autotest_common.sh@10 -- # set +x 00:21:07.644 16:32:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:07.644 16:32:41 -- target/dif.sh@109 -- # NULL_DIF=2 00:21:07.644 16:32:41 -- target/dif.sh@109 -- # bs=4k 00:21:07.644 16:32:41 -- target/dif.sh@109 -- # numjobs=8 00:21:07.644 16:32:41 -- target/dif.sh@109 -- # iodepth=16 00:21:07.644 16:32:41 -- target/dif.sh@109 -- # runtime= 00:21:07.644 16:32:41 -- target/dif.sh@109 -- # files=2 00:21:07.644 16:32:41 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:21:07.644 16:32:41 -- target/dif.sh@28 -- # local sub 00:21:07.644 16:32:41 -- target/dif.sh@30 -- # for sub in "$@" 00:21:07.644 16:32:41 -- target/dif.sh@31 -- # create_subsystem 0 00:21:07.644 16:32:41 -- target/dif.sh@18 -- # local sub_id=0 00:21:07.644 16:32:41 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:21:07.644 16:32:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:07.644 16:32:41 -- common/autotest_common.sh@10 -- # set +x 00:21:07.644 bdev_null0 00:21:07.644 16:32:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:07.644 16:32:41 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:07.644 16:32:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:07.644 16:32:41 -- common/autotest_common.sh@10 -- # set +x 00:21:07.644 16:32:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:07.644 16:32:41 -- target/dif.sh@23 -- # 
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:07.644 16:32:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:07.644 16:32:41 -- common/autotest_common.sh@10 -- # set +x 00:21:07.644 16:32:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:07.644 16:32:41 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:07.644 16:32:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:07.644 16:32:41 -- common/autotest_common.sh@10 -- # set +x 00:21:07.644 [2024-04-17 16:32:41.529031] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:07.644 16:32:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:07.644 16:32:41 -- target/dif.sh@30 -- # for sub in "$@" 00:21:07.644 16:32:41 -- target/dif.sh@31 -- # create_subsystem 1 00:21:07.644 16:32:41 -- target/dif.sh@18 -- # local sub_id=1 00:21:07.644 16:32:41 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:21:07.644 16:32:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:07.644 16:32:41 -- common/autotest_common.sh@10 -- # set +x 00:21:07.644 bdev_null1 00:21:07.644 16:32:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:07.644 16:32:41 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:21:07.644 16:32:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:07.644 16:32:41 -- common/autotest_common.sh@10 -- # set +x 00:21:07.644 16:32:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:07.644 16:32:41 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:21:07.644 16:32:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:07.644 16:32:41 -- common/autotest_common.sh@10 -- # set +x 00:21:07.644 16:32:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:07.644 16:32:41 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:07.644 16:32:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:07.644 16:32:41 -- common/autotest_common.sh@10 -- # set +x 00:21:07.644 16:32:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:07.644 16:32:41 -- target/dif.sh@30 -- # for sub in "$@" 00:21:07.644 16:32:41 -- target/dif.sh@31 -- # create_subsystem 2 00:21:07.644 16:32:41 -- target/dif.sh@18 -- # local sub_id=2 00:21:07.644 16:32:41 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:21:07.644 16:32:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:07.644 16:32:41 -- common/autotest_common.sh@10 -- # set +x 00:21:07.644 bdev_null2 00:21:07.644 16:32:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:07.644 16:32:41 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:21:07.644 16:32:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:07.644 16:32:41 -- common/autotest_common.sh@10 -- # set +x 00:21:07.644 16:32:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:07.644 16:32:41 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:21:07.644 16:32:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:07.644 16:32:41 -- common/autotest_common.sh@10 -- # set +x 00:21:07.644 16:32:41 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:07.644 16:32:41 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:07.644 16:32:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:07.644 16:32:41 -- common/autotest_common.sh@10 -- # set +x 00:21:07.644 16:32:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:07.644 16:32:41 -- target/dif.sh@112 -- # fio /dev/fd/62 00:21:07.644 16:32:41 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:21:07.644 16:32:41 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:21:07.644 16:32:41 -- nvmf/common.sh@521 -- # config=() 00:21:07.644 16:32:41 -- nvmf/common.sh@521 -- # local subsystem config 00:21:07.644 16:32:41 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:07.644 16:32:41 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:07.644 { 00:21:07.644 "params": { 00:21:07.644 "name": "Nvme$subsystem", 00:21:07.644 "trtype": "$TEST_TRANSPORT", 00:21:07.644 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.644 "adrfam": "ipv4", 00:21:07.645 "trsvcid": "$NVMF_PORT", 00:21:07.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.645 "hdgst": ${hdgst:-false}, 00:21:07.645 "ddgst": ${ddgst:-false} 00:21:07.645 }, 00:21:07.645 "method": "bdev_nvme_attach_controller" 00:21:07.645 } 00:21:07.645 EOF 00:21:07.645 )") 00:21:07.645 16:32:41 -- target/dif.sh@82 -- # gen_fio_conf 00:21:07.645 16:32:41 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:07.645 16:32:41 -- target/dif.sh@54 -- # local file 00:21:07.645 16:32:41 -- target/dif.sh@56 -- # cat 00:21:07.645 16:32:41 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:07.645 16:32:41 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:21:07.645 16:32:41 -- nvmf/common.sh@543 -- # cat 00:21:07.645 16:32:41 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:07.645 16:32:41 -- common/autotest_common.sh@1325 -- # local sanitizers 00:21:07.645 16:32:41 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:07.645 16:32:41 -- common/autotest_common.sh@1327 -- # shift 00:21:07.645 16:32:41 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:21:07.645 16:32:41 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:21:07.645 16:32:41 -- target/dif.sh@72 -- # (( file = 1 )) 00:21:07.645 16:32:41 -- target/dif.sh@72 -- # (( file <= files )) 00:21:07.645 16:32:41 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:07.645 16:32:41 -- target/dif.sh@73 -- # cat 00:21:07.645 16:32:41 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:07.645 16:32:41 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:21:07.645 16:32:41 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:07.645 { 00:21:07.645 "params": { 00:21:07.645 "name": "Nvme$subsystem", 00:21:07.645 "trtype": "$TEST_TRANSPORT", 00:21:07.645 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.645 "adrfam": "ipv4", 00:21:07.645 "trsvcid": "$NVMF_PORT", 00:21:07.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.645 "hdgst": ${hdgst:-false}, 00:21:07.645 "ddgst": 
${ddgst:-false} 00:21:07.645 }, 00:21:07.645 "method": "bdev_nvme_attach_controller" 00:21:07.645 } 00:21:07.645 EOF 00:21:07.645 )") 00:21:07.645 16:32:41 -- common/autotest_common.sh@1331 -- # grep libasan 00:21:07.645 16:32:41 -- target/dif.sh@72 -- # (( file++ )) 00:21:07.645 16:32:41 -- target/dif.sh@72 -- # (( file <= files )) 00:21:07.645 16:32:41 -- nvmf/common.sh@543 -- # cat 00:21:07.645 16:32:41 -- target/dif.sh@73 -- # cat 00:21:07.645 16:32:41 -- target/dif.sh@72 -- # (( file++ )) 00:21:07.645 16:32:41 -- target/dif.sh@72 -- # (( file <= files )) 00:21:07.645 16:32:41 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:07.645 16:32:41 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:07.645 { 00:21:07.645 "params": { 00:21:07.645 "name": "Nvme$subsystem", 00:21:07.645 "trtype": "$TEST_TRANSPORT", 00:21:07.645 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.645 "adrfam": "ipv4", 00:21:07.645 "trsvcid": "$NVMF_PORT", 00:21:07.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.645 "hdgst": ${hdgst:-false}, 00:21:07.645 "ddgst": ${ddgst:-false} 00:21:07.645 }, 00:21:07.645 "method": "bdev_nvme_attach_controller" 00:21:07.645 } 00:21:07.645 EOF 00:21:07.645 )") 00:21:07.645 16:32:41 -- nvmf/common.sh@543 -- # cat 00:21:07.645 16:32:41 -- nvmf/common.sh@545 -- # jq . 00:21:07.645 16:32:41 -- nvmf/common.sh@546 -- # IFS=, 00:21:07.645 16:32:41 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:21:07.645 "params": { 00:21:07.645 "name": "Nvme0", 00:21:07.645 "trtype": "tcp", 00:21:07.645 "traddr": "10.0.0.2", 00:21:07.645 "adrfam": "ipv4", 00:21:07.645 "trsvcid": "4420", 00:21:07.645 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:07.645 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:07.645 "hdgst": false, 00:21:07.645 "ddgst": false 00:21:07.645 }, 00:21:07.645 "method": "bdev_nvme_attach_controller" 00:21:07.645 },{ 00:21:07.645 "params": { 00:21:07.645 "name": "Nvme1", 00:21:07.645 "trtype": "tcp", 00:21:07.645 "traddr": "10.0.0.2", 00:21:07.645 "adrfam": "ipv4", 00:21:07.645 "trsvcid": "4420", 00:21:07.645 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:07.645 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:07.645 "hdgst": false, 00:21:07.645 "ddgst": false 00:21:07.645 }, 00:21:07.645 "method": "bdev_nvme_attach_controller" 00:21:07.645 },{ 00:21:07.645 "params": { 00:21:07.645 "name": "Nvme2", 00:21:07.645 "trtype": "tcp", 00:21:07.645 "traddr": "10.0.0.2", 00:21:07.645 "adrfam": "ipv4", 00:21:07.645 "trsvcid": "4420", 00:21:07.645 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:07.645 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:07.645 "hdgst": false, 00:21:07.645 "ddgst": false 00:21:07.645 }, 00:21:07.645 "method": "bdev_nvme_attach_controller" 00:21:07.645 }' 00:21:07.645 16:32:41 -- common/autotest_common.sh@1331 -- # asan_lib= 00:21:07.645 16:32:41 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:21:07.645 16:32:41 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:21:07.645 16:32:41 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:07.645 16:32:41 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:21:07.645 16:32:41 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:21:07.904 16:32:41 -- common/autotest_common.sh@1331 -- # asan_lib= 00:21:07.904 16:32:41 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:21:07.904 16:32:41 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:07.904 16:32:41 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:07.904 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:21:07.904 ... 00:21:07.904 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:21:07.904 ... 00:21:07.904 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:21:07.904 ... 00:21:07.904 fio-3.35 00:21:07.904 Starting 24 threads 00:21:08.838 [2024-04-17 16:32:42.519754] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:21:08.838 [2024-04-17 16:32:42.519868] rpc.c: 167:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:21:18.807 00:21:18.807 filename0: (groupid=0, jobs=1): err= 0: pid=90692: Wed Apr 17 16:32:52 2024 00:21:18.807 read: IOPS=174, BW=700KiB/s (716kB/s)(7020KiB/10034msec) 00:21:18.807 slat (usec): min=4, max=8032, avg=24.66, stdev=331.23 00:21:18.807 clat (msec): min=36, max=193, avg=91.22, stdev=25.46 00:21:18.807 lat (msec): min=36, max=193, avg=91.25, stdev=25.46 00:21:18.807 clat percentiles (msec): 00:21:18.807 | 1.00th=[ 48], 5.00th=[ 58], 10.00th=[ 70], 20.00th=[ 72], 00:21:18.807 | 30.00th=[ 75], 40.00th=[ 83], 50.00th=[ 85], 60.00th=[ 94], 00:21:18.807 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 131], 95.00th=[ 144], 00:21:18.807 | 99.00th=[ 180], 99.50th=[ 180], 99.90th=[ 194], 99.95th=[ 194], 00:21:18.807 | 99.99th=[ 194] 00:21:18.807 bw ( KiB/s): min= 512, max= 784, per=3.57%, avg=692.42, stdev=72.09, samples=19 00:21:18.807 iops : min= 128, max= 196, avg=173.11, stdev=18.02, samples=19 00:21:18.807 lat (msec) : 50=2.79%, 100=71.28%, 250=25.93% 00:21:18.807 cpu : usr=33.15%, sys=0.88%, ctx=899, majf=0, minf=9 00:21:18.807 IO depths : 1=3.4%, 2=7.5%, 4=18.3%, 8=61.7%, 16=9.1%, 32=0.0%, >=64=0.0% 00:21:18.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.807 complete : 0=0.0%, 4=92.3%, 8=2.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.807 issued rwts: total=1755,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:18.807 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:18.807 filename0: (groupid=0, jobs=1): err= 0: pid=90693: Wed Apr 17 16:32:52 2024 00:21:18.807 read: IOPS=192, BW=770KiB/s (789kB/s)(7708KiB/10004msec) 00:21:18.807 slat (usec): min=5, max=4025, avg=13.84, stdev=91.55 00:21:18.807 clat (msec): min=34, max=179, avg=82.93, stdev=23.55 00:21:18.807 lat (msec): min=34, max=179, avg=82.94, stdev=23.55 00:21:18.807 clat percentiles (msec): 00:21:18.807 | 1.00th=[ 40], 5.00th=[ 47], 10.00th=[ 57], 20.00th=[ 61], 00:21:18.807 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 82], 60.00th=[ 85], 00:21:18.807 | 70.00th=[ 96], 80.00th=[ 107], 90.00th=[ 112], 95.00th=[ 122], 00:21:18.807 | 99.00th=[ 146], 99.50th=[ 159], 99.90th=[ 180], 99.95th=[ 180], 00:21:18.807 | 99.99th=[ 180] 00:21:18.807 bw ( KiB/s): min= 512, max= 992, per=3.94%, avg=764.26, stdev=113.21, samples=19 00:21:18.807 iops : min= 128, max= 248, avg=191.05, stdev=28.32, samples=19 00:21:18.807 lat (msec) : 50=7.47%, 100=69.17%, 250=23.35% 00:21:18.807 cpu : usr=33.57%, sys=0.85%, ctx=890, majf=0, minf=9 00:21:18.807 IO depths : 1=1.4%, 2=3.2%, 4=10.0%, 8=72.9%, 16=12.6%, 32=0.0%, >=64=0.0% 00:21:18.807 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.807 complete : 0=0.0%, 4=90.2%, 8=5.6%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.807 issued rwts: total=1927,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:18.807 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:18.807 filename0: (groupid=0, jobs=1): err= 0: pid=90694: Wed Apr 17 16:32:52 2024 00:21:18.807 read: IOPS=219, BW=878KiB/s (899kB/s)(8816KiB/10044msec) 00:21:18.807 slat (usec): min=5, max=4017, avg=13.43, stdev=85.49 00:21:18.807 clat (msec): min=35, max=144, avg=72.83, stdev=22.35 00:21:18.807 lat (msec): min=35, max=144, avg=72.85, stdev=22.35 00:21:18.807 clat percentiles (msec): 00:21:18.807 | 1.00th=[ 40], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 53], 00:21:18.807 | 30.00th=[ 59], 40.00th=[ 63], 50.00th=[ 72], 60.00th=[ 73], 00:21:18.807 | 70.00th=[ 82], 80.00th=[ 88], 90.00th=[ 108], 95.00th=[ 121], 00:21:18.807 | 99.00th=[ 132], 99.50th=[ 134], 99.90th=[ 144], 99.95th=[ 144], 00:21:18.807 | 99.99th=[ 144] 00:21:18.807 bw ( KiB/s): min= 688, max= 1117, per=4.51%, avg=875.05, stdev=122.18, samples=20 00:21:18.807 iops : min= 172, max= 279, avg=218.75, stdev=30.52, samples=20 00:21:18.807 lat (msec) : 50=16.88%, 100=70.10%, 250=13.02% 00:21:18.807 cpu : usr=36.86%, sys=1.02%, ctx=1098, majf=0, minf=9 00:21:18.807 IO depths : 1=1.1%, 2=2.5%, 4=9.4%, 8=74.3%, 16=12.8%, 32=0.0%, >=64=0.0% 00:21:18.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.807 complete : 0=0.0%, 4=90.0%, 8=5.8%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.807 issued rwts: total=2204,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:18.807 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:18.807 filename0: (groupid=0, jobs=1): err= 0: pid=90695: Wed Apr 17 16:32:52 2024 00:21:18.807 read: IOPS=183, BW=734KiB/s (751kB/s)(7356KiB/10024msec) 00:21:18.807 slat (usec): min=3, max=2031, avg=12.77, stdev=47.70 00:21:18.807 clat (msec): min=35, max=189, avg=87.07, stdev=24.48 00:21:18.807 lat (msec): min=35, max=189, avg=87.08, stdev=24.48 00:21:18.807 clat percentiles (msec): 00:21:18.807 | 1.00th=[ 45], 5.00th=[ 52], 10.00th=[ 58], 20.00th=[ 69], 00:21:18.807 | 30.00th=[ 73], 40.00th=[ 80], 50.00th=[ 84], 60.00th=[ 89], 00:21:18.807 | 70.00th=[ 99], 80.00th=[ 107], 90.00th=[ 113], 95.00th=[ 138], 00:21:18.807 | 99.00th=[ 167], 99.50th=[ 167], 99.90th=[ 190], 99.95th=[ 190], 00:21:18.807 | 99.99th=[ 190] 00:21:18.807 bw ( KiB/s): min= 512, max= 976, per=3.73%, avg=723.37, stdev=106.51, samples=19 00:21:18.807 iops : min= 128, max= 244, avg=180.84, stdev=26.63, samples=19 00:21:18.807 lat (msec) : 50=4.40%, 100=68.52%, 250=27.08% 00:21:18.807 cpu : usr=41.05%, sys=0.97%, ctx=1439, majf=0, minf=9 00:21:18.807 IO depths : 1=2.2%, 2=4.6%, 4=12.9%, 8=69.3%, 16=10.9%, 32=0.0%, >=64=0.0% 00:21:18.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.807 complete : 0=0.0%, 4=90.9%, 8=3.9%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.807 issued rwts: total=1839,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:18.807 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:18.807 filename0: (groupid=0, jobs=1): err= 0: pid=90696: Wed Apr 17 16:32:52 2024 00:21:18.807 read: IOPS=228, BW=914KiB/s (936kB/s)(9192KiB/10060msec) 00:21:18.807 slat (usec): min=5, max=8034, avg=18.92, stdev=205.30 00:21:18.807 clat (msec): min=9, max=178, avg=69.80, stdev=22.80 00:21:18.807 lat (msec): min=9, max=178, avg=69.82, stdev=22.79 00:21:18.807 clat percentiles (msec): 00:21:18.807 | 
1.00th=[ 14], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 52], 00:21:18.807 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 67], 60.00th=[ 72], 00:21:18.807 | 70.00th=[ 79], 80.00th=[ 85], 90.00th=[ 100], 95.00th=[ 111], 00:21:18.807 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 178], 99.95th=[ 178], 00:21:18.807 | 99.99th=[ 178] 00:21:18.807 bw ( KiB/s): min= 728, max= 1272, per=4.71%, avg=914.70, stdev=147.99, samples=20 00:21:18.807 iops : min= 182, max= 318, avg=228.65, stdev=37.02, samples=20 00:21:18.807 lat (msec) : 10=0.70%, 20=0.70%, 50=15.40%, 100=74.06%, 250=9.14% 00:21:18.807 cpu : usr=40.63%, sys=1.24%, ctx=1154, majf=0, minf=9 00:21:18.807 IO depths : 1=1.2%, 2=2.5%, 4=10.1%, 8=74.1%, 16=12.1%, 32=0.0%, >=64=0.0% 00:21:18.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.807 complete : 0=0.0%, 4=89.8%, 8=5.4%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.807 issued rwts: total=2298,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:18.807 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:18.807 filename0: (groupid=0, jobs=1): err= 0: pid=90697: Wed Apr 17 16:32:52 2024 00:21:18.807 read: IOPS=171, BW=687KiB/s (703kB/s)(6876KiB/10011msec) 00:21:18.807 slat (usec): min=4, max=8031, avg=22.77, stdev=289.95 00:21:18.807 clat (msec): min=33, max=204, avg=92.96, stdev=24.92 00:21:18.807 lat (msec): min=33, max=204, avg=92.99, stdev=24.92 00:21:18.807 clat percentiles (msec): 00:21:18.807 | 1.00th=[ 51], 5.00th=[ 58], 10.00th=[ 69], 20.00th=[ 74], 00:21:18.807 | 30.00th=[ 80], 40.00th=[ 84], 50.00th=[ 85], 60.00th=[ 96], 00:21:18.807 | 70.00th=[ 105], 80.00th=[ 112], 90.00th=[ 131], 95.00th=[ 134], 00:21:18.807 | 99.00th=[ 178], 99.50th=[ 180], 99.90th=[ 205], 99.95th=[ 205], 00:21:18.807 | 99.99th=[ 205] 00:21:18.807 bw ( KiB/s): min= 512, max= 808, per=3.51%, avg=680.37, stdev=80.38, samples=19 00:21:18.807 iops : min= 128, max= 202, avg=170.05, stdev=20.10, samples=19 00:21:18.807 lat (msec) : 50=0.93%, 100=66.38%, 250=32.69% 00:21:18.807 cpu : usr=34.15%, sys=0.97%, ctx=1210, majf=0, minf=9 00:21:18.807 IO depths : 1=2.9%, 2=6.3%, 4=17.2%, 8=63.9%, 16=9.6%, 32=0.0%, >=64=0.0% 00:21:18.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.807 complete : 0=0.0%, 4=91.7%, 8=2.7%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.807 issued rwts: total=1719,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:18.807 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:18.807 filename0: (groupid=0, jobs=1): err= 0: pid=90698: Wed Apr 17 16:32:52 2024 00:21:18.807 read: IOPS=183, BW=734KiB/s (752kB/s)(7352KiB/10013msec) 00:21:18.807 slat (usec): min=4, max=4022, avg=14.26, stdev=96.64 00:21:18.807 clat (msec): min=35, max=150, avg=87.02, stdev=21.57 00:21:18.807 lat (msec): min=35, max=150, avg=87.03, stdev=21.57 00:21:18.807 clat percentiles (msec): 00:21:18.807 | 1.00th=[ 46], 5.00th=[ 56], 10.00th=[ 62], 20.00th=[ 71], 00:21:18.807 | 30.00th=[ 75], 40.00th=[ 79], 50.00th=[ 84], 60.00th=[ 86], 00:21:18.807 | 70.00th=[ 96], 80.00th=[ 107], 90.00th=[ 117], 95.00th=[ 132], 00:21:18.807 | 99.00th=[ 144], 99.50th=[ 146], 99.90th=[ 150], 99.95th=[ 150], 00:21:18.807 | 99.99th=[ 150] 00:21:18.807 bw ( KiB/s): min= 592, max= 864, per=3.74%, avg=726.47, stdev=86.25, samples=19 00:21:18.807 iops : min= 148, max= 216, avg=181.58, stdev=21.55, samples=19 00:21:18.807 lat (msec) : 50=2.72%, 100=70.24%, 250=27.04% 00:21:18.807 cpu : usr=39.47%, sys=1.25%, ctx=1129, majf=0, minf=9 00:21:18.808 IO depths : 1=2.6%, 2=6.0%, 4=15.9%, 8=65.1%, 
16=10.3%, 32=0.0%, >=64=0.0% 00:21:18.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.808 complete : 0=0.0%, 4=91.8%, 8=3.0%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.808 issued rwts: total=1838,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:18.808 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:18.808 filename0: (groupid=0, jobs=1): err= 0: pid=90699: Wed Apr 17 16:32:52 2024 00:21:18.808 read: IOPS=183, BW=733KiB/s (750kB/s)(7332KiB/10007msec) 00:21:18.808 slat (usec): min=3, max=8179, avg=21.49, stdev=239.18 00:21:18.808 clat (msec): min=24, max=159, avg=87.19, stdev=24.30 00:21:18.808 lat (msec): min=24, max=159, avg=87.21, stdev=24.30 00:21:18.808 clat percentiles (msec): 00:21:18.808 | 1.00th=[ 37], 5.00th=[ 51], 10.00th=[ 57], 20.00th=[ 72], 00:21:18.808 | 30.00th=[ 74], 40.00th=[ 80], 50.00th=[ 84], 60.00th=[ 88], 00:21:18.808 | 70.00th=[ 99], 80.00th=[ 107], 90.00th=[ 121], 95.00th=[ 136], 00:21:18.808 | 99.00th=[ 157], 99.50th=[ 157], 99.90th=[ 161], 99.95th=[ 161], 00:21:18.808 | 99.99th=[ 161] 00:21:18.808 bw ( KiB/s): min= 512, max= 1024, per=3.73%, avg=724.63, stdev=125.54, samples=19 00:21:18.808 iops : min= 128, max= 256, avg=181.16, stdev=31.39, samples=19 00:21:18.808 lat (msec) : 50=5.24%, 100=69.12%, 250=25.64% 00:21:18.808 cpu : usr=38.86%, sys=1.12%, ctx=1108, majf=0, minf=9 00:21:18.808 IO depths : 1=2.6%, 2=5.9%, 4=15.7%, 8=65.4%, 16=10.4%, 32=0.0%, >=64=0.0% 00:21:18.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.808 complete : 0=0.0%, 4=91.7%, 8=3.1%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.808 issued rwts: total=1833,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:18.808 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:18.808 filename1: (groupid=0, jobs=1): err= 0: pid=90700: Wed Apr 17 16:32:52 2024 00:21:18.808 read: IOPS=210, BW=840KiB/s (861kB/s)(8440KiB/10042msec) 00:21:18.808 slat (usec): min=4, max=8020, avg=17.13, stdev=195.61 00:21:18.808 clat (msec): min=31, max=151, avg=76.00, stdev=22.07 00:21:18.808 lat (msec): min=31, max=151, avg=76.02, stdev=22.06 00:21:18.808 clat percentiles (msec): 00:21:18.808 | 1.00th=[ 43], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 56], 00:21:18.808 | 30.00th=[ 62], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 78], 00:21:18.808 | 70.00th=[ 85], 80.00th=[ 94], 90.00th=[ 108], 95.00th=[ 115], 00:21:18.808 | 99.00th=[ 144], 99.50th=[ 146], 99.90th=[ 153], 99.95th=[ 153], 00:21:18.808 | 99.99th=[ 153] 00:21:18.808 bw ( KiB/s): min= 640, max= 1024, per=4.31%, avg=837.60, stdev=136.53, samples=20 00:21:18.808 iops : min= 160, max= 256, avg=209.40, stdev=34.13, samples=20 00:21:18.808 lat (msec) : 50=11.90%, 100=72.13%, 250=15.97% 00:21:18.808 cpu : usr=41.60%, sys=1.06%, ctx=1275, majf=0, minf=9 00:21:18.808 IO depths : 1=1.3%, 2=3.2%, 4=10.9%, 8=72.7%, 16=11.8%, 32=0.0%, >=64=0.0% 00:21:18.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.808 complete : 0=0.0%, 4=90.2%, 8=4.8%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.808 issued rwts: total=2110,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:18.808 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:18.808 filename1: (groupid=0, jobs=1): err= 0: pid=90701: Wed Apr 17 16:32:52 2024 00:21:18.808 read: IOPS=215, BW=862KiB/s (883kB/s)(8660KiB/10044msec) 00:21:18.808 slat (usec): min=5, max=8027, avg=17.23, stdev=203.16 00:21:18.808 clat (msec): min=38, max=143, avg=74.09, stdev=19.79 00:21:18.808 lat (msec): min=38, max=144, avg=74.11, 
stdev=19.79 00:21:18.808 clat percentiles (msec): 00:21:18.808 | 1.00th=[ 44], 5.00th=[ 48], 10.00th=[ 49], 20.00th=[ 57], 00:21:18.808 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 79], 00:21:18.808 | 70.00th=[ 84], 80.00th=[ 85], 90.00th=[ 101], 95.00th=[ 114], 00:21:18.808 | 99.00th=[ 133], 99.50th=[ 134], 99.90th=[ 144], 99.95th=[ 144], 00:21:18.808 | 99.99th=[ 144] 00:21:18.808 bw ( KiB/s): min= 656, max= 1072, per=4.43%, avg=859.55, stdev=105.59, samples=20 00:21:18.808 iops : min= 164, max= 268, avg=214.85, stdev=26.39, samples=20 00:21:18.808 lat (msec) : 50=13.35%, 100=76.58%, 250=10.07% 00:21:18.808 cpu : usr=35.31%, sys=1.12%, ctx=1011, majf=0, minf=9 00:21:18.808 IO depths : 1=1.1%, 2=2.8%, 4=10.4%, 8=73.4%, 16=12.3%, 32=0.0%, >=64=0.0% 00:21:18.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.808 complete : 0=0.0%, 4=90.3%, 8=5.0%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.808 issued rwts: total=2165,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:18.808 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:18.808 filename1: (groupid=0, jobs=1): err= 0: pid=90702: Wed Apr 17 16:32:52 2024 00:21:18.808 read: IOPS=199, BW=798KiB/s (817kB/s)(8012KiB/10043msec) 00:21:18.808 slat (usec): min=5, max=8024, avg=17.36, stdev=200.24 00:21:18.808 clat (msec): min=35, max=151, avg=80.10, stdev=22.02 00:21:18.808 lat (msec): min=35, max=151, avg=80.12, stdev=22.02 00:21:18.808 clat percentiles (msec): 00:21:18.808 | 1.00th=[ 41], 5.00th=[ 48], 10.00th=[ 53], 20.00th=[ 61], 00:21:18.808 | 30.00th=[ 69], 40.00th=[ 73], 50.00th=[ 78], 60.00th=[ 83], 00:21:18.808 | 70.00th=[ 90], 80.00th=[ 99], 90.00th=[ 110], 95.00th=[ 122], 00:21:18.808 | 99.00th=[ 142], 99.50th=[ 146], 99.90th=[ 153], 99.95th=[ 153], 00:21:18.808 | 99.99th=[ 153] 00:21:18.808 bw ( KiB/s): min= 600, max= 1024, per=4.09%, avg=794.75, stdev=111.14, samples=20 00:21:18.808 iops : min= 150, max= 256, avg=198.65, stdev=27.82, samples=20 00:21:18.808 lat (msec) : 50=7.29%, 100=74.59%, 250=18.12% 00:21:18.808 cpu : usr=41.63%, sys=1.09%, ctx=1305, majf=0, minf=9 00:21:18.808 IO depths : 1=1.6%, 2=3.7%, 4=11.8%, 8=71.0%, 16=11.7%, 32=0.0%, >=64=0.0% 00:21:18.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.808 complete : 0=0.0%, 4=90.4%, 8=4.8%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.808 issued rwts: total=2003,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:18.808 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:18.808 filename1: (groupid=0, jobs=1): err= 0: pid=90703: Wed Apr 17 16:32:52 2024 00:21:18.808 read: IOPS=198, BW=794KiB/s (813kB/s)(7972KiB/10045msec) 00:21:18.808 slat (usec): min=4, max=11069, avg=21.15, stdev=305.85 00:21:18.808 clat (msec): min=34, max=155, avg=80.41, stdev=22.76 00:21:18.808 lat (msec): min=34, max=155, avg=80.43, stdev=22.78 00:21:18.808 clat percentiles (msec): 00:21:18.808 | 1.00th=[ 38], 5.00th=[ 48], 10.00th=[ 51], 20.00th=[ 61], 00:21:18.808 | 30.00th=[ 69], 40.00th=[ 73], 50.00th=[ 80], 60.00th=[ 85], 00:21:18.808 | 70.00th=[ 89], 80.00th=[ 96], 90.00th=[ 109], 95.00th=[ 124], 00:21:18.808 | 99.00th=[ 142], 99.50th=[ 155], 99.90th=[ 155], 99.95th=[ 155], 00:21:18.808 | 99.99th=[ 155] 00:21:18.808 bw ( KiB/s): min= 552, max= 1024, per=4.07%, avg=790.50, stdev=124.49, samples=20 00:21:18.808 iops : min= 138, max= 256, avg=197.60, stdev=31.11, samples=20 00:21:18.808 lat (msec) : 50=9.38%, 100=72.91%, 250=17.71% 00:21:18.808 cpu : usr=37.52%, sys=1.05%, ctx=1049, majf=0, minf=9 
00:21:18.808 IO depths : 1=1.4%, 2=2.9%, 4=10.5%, 8=73.1%, 16=12.2%, 32=0.0%, >=64=0.0% 00:21:18.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.808 complete : 0=0.0%, 4=89.8%, 8=5.7%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.808 issued rwts: total=1993,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:18.808 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:18.808 filename1: (groupid=0, jobs=1): err= 0: pid=90704: Wed Apr 17 16:32:52 2024 00:21:18.808 read: IOPS=232, BW=931KiB/s (953kB/s)(9308KiB/10001msec) 00:21:18.808 slat (usec): min=4, max=8072, avg=18.35, stdev=235.57 00:21:18.808 clat (msec): min=2, max=145, avg=68.66, stdev=23.80 00:21:18.808 lat (msec): min=2, max=145, avg=68.68, stdev=23.80 00:21:18.808 clat percentiles (msec): 00:21:18.808 | 1.00th=[ 4], 5.00th=[ 36], 10.00th=[ 47], 20.00th=[ 51], 00:21:18.808 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 71], 60.00th=[ 72], 00:21:18.808 | 70.00th=[ 80], 80.00th=[ 85], 90.00th=[ 97], 95.00th=[ 108], 00:21:18.808 | 99.00th=[ 133], 99.50th=[ 144], 99.90th=[ 146], 99.95th=[ 146], 00:21:18.808 | 99.99th=[ 146] 00:21:18.808 bw ( KiB/s): min= 768, max= 1632, per=4.83%, avg=936.68, stdev=194.09, samples=19 00:21:18.808 iops : min= 192, max= 408, avg=234.16, stdev=48.51, samples=19 00:21:18.808 lat (msec) : 4=1.98%, 10=1.25%, 20=0.90%, 50=15.47%, 100=72.50% 00:21:18.808 lat (msec) : 250=7.91% 00:21:18.808 cpu : usr=32.36%, sys=0.96%, ctx=1058, majf=0, minf=0 00:21:18.808 IO depths : 1=0.8%, 2=1.6%, 4=7.1%, 8=77.7%, 16=12.8%, 32=0.0%, >=64=0.0% 00:21:18.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.808 complete : 0=0.0%, 4=89.3%, 8=6.2%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.808 issued rwts: total=2327,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:18.808 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:18.808 filename1: (groupid=0, jobs=1): err= 0: pid=90705: Wed Apr 17 16:32:52 2024 00:21:18.808 read: IOPS=238, BW=954KiB/s (977kB/s)(9620KiB/10080msec) 00:21:18.808 slat (usec): min=6, max=8024, avg=21.23, stdev=227.58 00:21:18.808 clat (msec): min=2, max=156, avg=66.71, stdev=23.26 00:21:18.808 lat (msec): min=2, max=156, avg=66.73, stdev=23.26 00:21:18.808 clat percentiles (msec): 00:21:18.808 | 1.00th=[ 4], 5.00th=[ 33], 10.00th=[ 47], 20.00th=[ 50], 00:21:18.808 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 67], 60.00th=[ 72], 00:21:18.808 | 70.00th=[ 75], 80.00th=[ 82], 90.00th=[ 96], 95.00th=[ 108], 00:21:18.808 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 157], 99.95th=[ 157], 00:21:18.808 | 99.99th=[ 157] 00:21:18.808 bw ( KiB/s): min= 768, max= 1792, per=4.92%, avg=955.40, stdev=221.77, samples=20 00:21:18.808 iops : min= 192, max= 448, avg=238.80, stdev=55.46, samples=20 00:21:18.808 lat (msec) : 4=2.66%, 10=2.00%, 50=15.88%, 100=72.22%, 250=7.23% 00:21:18.808 cpu : usr=41.33%, sys=1.14%, ctx=1174, majf=0, minf=0 00:21:18.808 IO depths : 1=1.5%, 2=3.1%, 4=10.3%, 8=73.1%, 16=12.0%, 32=0.0%, >=64=0.0% 00:21:18.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.808 complete : 0=0.0%, 4=90.1%, 8=5.3%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.808 issued rwts: total=2405,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:18.808 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:18.808 filename1: (groupid=0, jobs=1): err= 0: pid=90706: Wed Apr 17 16:32:52 2024 00:21:18.808 read: IOPS=186, BW=744KiB/s (762kB/s)(7472KiB/10040msec) 00:21:18.808 slat (usec): min=5, max=8026, avg=21.69, stdev=249.65 
00:21:18.808 clat (msec): min=36, max=186, avg=85.81, stdev=22.49 00:21:18.808 lat (msec): min=36, max=186, avg=85.83, stdev=22.49 00:21:18.808 clat percentiles (msec): 00:21:18.808 | 1.00th=[ 43], 5.00th=[ 56], 10.00th=[ 61], 20.00th=[ 71], 00:21:18.808 | 30.00th=[ 74], 40.00th=[ 78], 50.00th=[ 82], 60.00th=[ 85], 00:21:18.808 | 70.00th=[ 95], 80.00th=[ 102], 90.00th=[ 114], 95.00th=[ 127], 00:21:18.808 | 99.00th=[ 159], 99.50th=[ 167], 99.90th=[ 188], 99.95th=[ 188], 00:21:18.808 | 99.99th=[ 188] 00:21:18.808 bw ( KiB/s): min= 512, max= 896, per=3.81%, avg=740.55, stdev=113.44, samples=20 00:21:18.809 iops : min= 128, max= 224, avg=185.10, stdev=28.32, samples=20 00:21:18.809 lat (msec) : 50=2.30%, 100=75.75%, 250=21.95% 00:21:18.809 cpu : usr=43.00%, sys=1.20%, ctx=1375, majf=0, minf=9 00:21:18.809 IO depths : 1=3.4%, 2=7.4%, 4=18.6%, 8=61.3%, 16=9.3%, 32=0.0%, >=64=0.0% 00:21:18.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.809 complete : 0=0.0%, 4=92.2%, 8=2.2%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.809 issued rwts: total=1868,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:18.809 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:18.809 filename1: (groupid=0, jobs=1): err= 0: pid=90707: Wed Apr 17 16:32:52 2024 00:21:18.809 read: IOPS=202, BW=809KiB/s (829kB/s)(8148KiB/10066msec) 00:21:18.809 slat (usec): min=4, max=8018, avg=15.66, stdev=177.47 00:21:18.809 clat (msec): min=5, max=178, avg=78.86, stdev=25.83 00:21:18.809 lat (msec): min=5, max=178, avg=78.88, stdev=25.83 00:21:18.809 clat percentiles (msec): 00:21:18.809 | 1.00th=[ 10], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 59], 00:21:18.809 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 84], 00:21:18.809 | 70.00th=[ 86], 80.00th=[ 97], 90.00th=[ 108], 95.00th=[ 121], 00:21:18.809 | 99.00th=[ 157], 99.50th=[ 161], 99.90th=[ 180], 99.95th=[ 180], 00:21:18.809 | 99.99th=[ 180] 00:21:18.809 bw ( KiB/s): min= 544, max= 1272, per=4.16%, avg=807.90, stdev=173.34, samples=20 00:21:18.809 iops : min= 136, max= 318, avg=201.95, stdev=43.34, samples=20 00:21:18.809 lat (msec) : 10=1.67%, 20=0.69%, 50=10.80%, 100=71.13%, 250=15.71% 00:21:18.809 cpu : usr=32.33%, sys=0.88%, ctx=973, majf=0, minf=9 00:21:18.809 IO depths : 1=1.7%, 2=3.8%, 4=13.5%, 8=69.8%, 16=11.3%, 32=0.0%, >=64=0.0% 00:21:18.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.809 complete : 0=0.0%, 4=91.0%, 8=3.8%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.809 issued rwts: total=2037,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:18.809 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:18.809 filename2: (groupid=0, jobs=1): err= 0: pid=90708: Wed Apr 17 16:32:52 2024 00:21:18.809 read: IOPS=202, BW=811KiB/s (831kB/s)(8148KiB/10041msec) 00:21:18.809 slat (usec): min=5, max=8027, avg=15.61, stdev=177.68 00:21:18.809 clat (msec): min=34, max=155, avg=78.77, stdev=23.85 00:21:18.809 lat (msec): min=34, max=155, avg=78.78, stdev=23.85 00:21:18.809 clat percentiles (msec): 00:21:18.809 | 1.00th=[ 40], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 59], 00:21:18.809 | 30.00th=[ 64], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 81], 00:21:18.809 | 70.00th=[ 87], 80.00th=[ 96], 90.00th=[ 109], 95.00th=[ 122], 00:21:18.809 | 99.00th=[ 146], 99.50th=[ 155], 99.90th=[ 157], 99.95th=[ 157], 00:21:18.809 | 99.99th=[ 157] 00:21:18.809 bw ( KiB/s): min= 512, max= 1024, per=4.17%, avg=808.40, stdev=125.86, samples=20 00:21:18.809 iops : min= 128, max= 256, avg=202.10, stdev=31.47, samples=20 
00:21:18.809 lat (msec) : 50=12.42%, 100=69.37%, 250=18.21% 00:21:18.809 cpu : usr=34.50%, sys=1.02%, ctx=980, majf=0, minf=9 00:21:18.809 IO depths : 1=1.3%, 2=3.0%, 4=10.9%, 8=72.8%, 16=12.0%, 32=0.0%, >=64=0.0% 00:21:18.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.809 complete : 0=0.0%, 4=90.4%, 8=4.7%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.809 issued rwts: total=2037,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:18.809 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:18.809 filename2: (groupid=0, jobs=1): err= 0: pid=90709: Wed Apr 17 16:32:52 2024 00:21:18.809 read: IOPS=213, BW=856KiB/s (876kB/s)(8596KiB/10044msec) 00:21:18.809 slat (usec): min=3, max=8020, avg=20.53, stdev=258.90 00:21:18.809 clat (msec): min=36, max=183, avg=74.58, stdev=23.56 00:21:18.809 lat (msec): min=36, max=183, avg=74.60, stdev=23.55 00:21:18.809 clat percentiles (msec): 00:21:18.809 | 1.00th=[ 40], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 56], 00:21:18.809 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 72], 60.00th=[ 74], 00:21:18.809 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 118], 00:21:18.809 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 184], 99.95th=[ 184], 00:21:18.809 | 99.99th=[ 184] 00:21:18.809 bw ( KiB/s): min= 640, max= 1152, per=4.40%, avg=853.15, stdev=140.27, samples=20 00:21:18.809 iops : min= 160, max= 288, avg=213.25, stdev=35.06, samples=20 00:21:18.809 lat (msec) : 50=16.01%, 100=66.82%, 250=17.17% 00:21:18.809 cpu : usr=35.78%, sys=0.99%, ctx=1079, majf=0, minf=9 00:21:18.809 IO depths : 1=1.3%, 2=3.0%, 4=11.0%, 8=72.8%, 16=11.8%, 32=0.0%, >=64=0.0% 00:21:18.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.809 complete : 0=0.0%, 4=90.1%, 8=5.0%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.809 issued rwts: total=2149,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:18.809 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:18.809 filename2: (groupid=0, jobs=1): err= 0: pid=90710: Wed Apr 17 16:32:52 2024 00:21:18.809 read: IOPS=238, BW=955KiB/s (978kB/s)(9612KiB/10065msec) 00:21:18.809 slat (usec): min=5, max=5051, avg=17.06, stdev=152.55 00:21:18.809 clat (msec): min=9, max=184, avg=66.87, stdev=22.30 00:21:18.809 lat (msec): min=9, max=184, avg=66.89, stdev=22.29 00:21:18.809 clat percentiles (msec): 00:21:18.809 | 1.00th=[ 16], 5.00th=[ 42], 10.00th=[ 46], 20.00th=[ 51], 00:21:18.809 | 30.00th=[ 55], 40.00th=[ 58], 50.00th=[ 63], 60.00th=[ 69], 00:21:18.809 | 70.00th=[ 73], 80.00th=[ 82], 90.00th=[ 95], 95.00th=[ 106], 00:21:18.809 | 99.00th=[ 142], 99.50th=[ 161], 99.90th=[ 186], 99.95th=[ 186], 00:21:18.809 | 99.99th=[ 186] 00:21:18.809 bw ( KiB/s): min= 688, max= 1197, per=4.92%, avg=954.50, stdev=135.48, samples=20 00:21:18.809 iops : min= 172, max= 299, avg=238.60, stdev=33.83, samples=20 00:21:18.809 lat (msec) : 10=0.42%, 20=0.92%, 50=18.93%, 100=72.74%, 250=6.99% 00:21:18.809 cpu : usr=44.02%, sys=1.32%, ctx=1450, majf=0, minf=9 00:21:18.809 IO depths : 1=0.8%, 2=1.6%, 4=7.5%, 8=77.5%, 16=12.6%, 32=0.0%, >=64=0.0% 00:21:18.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.809 complete : 0=0.0%, 4=89.4%, 8=5.9%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.809 issued rwts: total=2403,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:18.809 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:18.809 filename2: (groupid=0, jobs=1): err= 0: pid=90711: Wed Apr 17 16:32:52 2024 00:21:18.809 read: IOPS=202, BW=809KiB/s 
(829kB/s)(8132KiB/10050msec) 00:21:18.809 slat (usec): min=5, max=9029, avg=31.79, stdev=408.33 00:21:18.809 clat (msec): min=32, max=156, avg=78.87, stdev=24.07 00:21:18.809 lat (msec): min=32, max=156, avg=78.90, stdev=24.08 00:21:18.809 clat percentiles (msec): 00:21:18.809 | 1.00th=[ 44], 5.00th=[ 47], 10.00th=[ 50], 20.00th=[ 59], 00:21:18.809 | 30.00th=[ 62], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 84], 00:21:18.809 | 70.00th=[ 86], 80.00th=[ 97], 90.00th=[ 111], 95.00th=[ 121], 00:21:18.809 | 99.00th=[ 144], 99.50th=[ 157], 99.90th=[ 157], 99.95th=[ 157], 00:21:18.809 | 99.99th=[ 157] 00:21:18.809 bw ( KiB/s): min= 640, max= 1024, per=4.16%, avg=806.70, stdev=107.10, samples=20 00:21:18.809 iops : min= 160, max= 256, avg=201.65, stdev=26.77, samples=20 00:21:18.809 lat (msec) : 50=10.92%, 100=70.19%, 250=18.89% 00:21:18.809 cpu : usr=34.94%, sys=0.98%, ctx=1089, majf=0, minf=9 00:21:18.809 IO depths : 1=1.7%, 2=3.5%, 4=11.7%, 8=71.5%, 16=11.6%, 32=0.0%, >=64=0.0% 00:21:18.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.809 complete : 0=0.0%, 4=90.2%, 8=5.0%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.809 issued rwts: total=2033,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:18.809 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:18.809 filename2: (groupid=0, jobs=1): err= 0: pid=90712: Wed Apr 17 16:32:52 2024 00:21:18.809 read: IOPS=236, BW=945KiB/s (967kB/s)(9492KiB/10047msec) 00:21:18.809 slat (usec): min=4, max=8025, avg=16.62, stdev=184.02 00:21:18.809 clat (msec): min=34, max=132, avg=67.59, stdev=19.05 00:21:18.809 lat (msec): min=34, max=132, avg=67.60, stdev=19.05 00:21:18.809 clat percentiles (msec): 00:21:18.809 | 1.00th=[ 40], 5.00th=[ 45], 10.00th=[ 47], 20.00th=[ 51], 00:21:18.809 | 30.00th=[ 55], 40.00th=[ 59], 50.00th=[ 63], 60.00th=[ 71], 00:21:18.809 | 70.00th=[ 74], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 108], 00:21:18.809 | 99.00th=[ 118], 99.50th=[ 131], 99.90th=[ 131], 99.95th=[ 133], 00:21:18.809 | 99.99th=[ 133] 00:21:18.809 bw ( KiB/s): min= 764, max= 1232, per=4.86%, avg=942.50, stdev=130.06, samples=20 00:21:18.809 iops : min= 191, max= 308, avg=235.60, stdev=32.51, samples=20 00:21:18.809 lat (msec) : 50=20.40%, 100=72.52%, 250=7.08% 00:21:18.809 cpu : usr=41.29%, sys=1.29%, ctx=1071, majf=0, minf=9 00:21:18.809 IO depths : 1=0.8%, 2=1.6%, 4=7.7%, 8=77.4%, 16=12.5%, 32=0.0%, >=64=0.0% 00:21:18.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.809 complete : 0=0.0%, 4=89.4%, 8=5.8%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.809 issued rwts: total=2373,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:18.809 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:18.809 filename2: (groupid=0, jobs=1): err= 0: pid=90713: Wed Apr 17 16:32:52 2024 00:21:18.809 read: IOPS=179, BW=718KiB/s (735kB/s)(7208KiB/10044msec) 00:21:18.809 slat (usec): min=4, max=8030, avg=22.22, stdev=283.27 00:21:18.809 clat (msec): min=40, max=158, avg=88.98, stdev=22.70 00:21:18.809 lat (msec): min=40, max=158, avg=89.01, stdev=22.69 00:21:18.809 clat percentiles (msec): 00:21:18.809 | 1.00th=[ 46], 5.00th=[ 51], 10.00th=[ 61], 20.00th=[ 72], 00:21:18.809 | 30.00th=[ 75], 40.00th=[ 83], 50.00th=[ 85], 60.00th=[ 95], 00:21:18.809 | 70.00th=[ 100], 80.00th=[ 108], 90.00th=[ 118], 95.00th=[ 132], 00:21:18.809 | 99.00th=[ 155], 99.50th=[ 157], 99.90th=[ 159], 99.95th=[ 159], 00:21:18.809 | 99.99th=[ 159] 00:21:18.809 bw ( KiB/s): min= 512, max= 944, per=3.68%, avg=714.40, stdev=103.38, samples=20 
00:21:18.809 iops : min= 128, max= 236, avg=178.55, stdev=25.86, samples=20 00:21:18.809 lat (msec) : 50=4.77%, 100=67.20%, 250=28.02% 00:21:18.809 cpu : usr=33.41%, sys=1.06%, ctx=889, majf=0, minf=9 00:21:18.809 IO depths : 1=2.9%, 2=6.3%, 4=15.9%, 8=65.1%, 16=9.8%, 32=0.0%, >=64=0.0% 00:21:18.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.809 complete : 0=0.0%, 4=91.6%, 8=3.0%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.809 issued rwts: total=1802,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:18.809 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:18.809 filename2: (groupid=0, jobs=1): err= 0: pid=90714: Wed Apr 17 16:32:52 2024 00:21:18.809 read: IOPS=185, BW=742KiB/s (760kB/s)(7448KiB/10038msec) 00:21:18.809 slat (usec): min=5, max=8041, avg=22.51, stdev=278.89 00:21:18.809 clat (msec): min=42, max=180, avg=86.06, stdev=23.18 00:21:18.809 lat (msec): min=42, max=180, avg=86.09, stdev=23.17 00:21:18.809 clat percentiles (msec): 00:21:18.809 | 1.00th=[ 47], 5.00th=[ 53], 10.00th=[ 58], 20.00th=[ 70], 00:21:18.809 | 30.00th=[ 74], 40.00th=[ 78], 50.00th=[ 82], 60.00th=[ 87], 00:21:18.810 | 70.00th=[ 94], 80.00th=[ 109], 90.00th=[ 115], 95.00th=[ 129], 00:21:18.810 | 99.00th=[ 163], 99.50th=[ 167], 99.90th=[ 180], 99.95th=[ 180], 00:21:18.810 | 99.99th=[ 180] 00:21:18.810 bw ( KiB/s): min= 512, max= 1024, per=3.80%, avg=738.40, stdev=123.78, samples=20 00:21:18.810 iops : min= 128, max= 256, avg=184.60, stdev=30.94, samples=20 00:21:18.810 lat (msec) : 50=4.08%, 100=69.12%, 250=26.80% 00:21:18.810 cpu : usr=42.04%, sys=1.12%, ctx=1238, majf=0, minf=9 00:21:18.810 IO depths : 1=3.2%, 2=6.8%, 4=16.9%, 8=63.6%, 16=9.5%, 32=0.0%, >=64=0.0% 00:21:18.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.810 complete : 0=0.0%, 4=91.8%, 8=2.6%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.810 issued rwts: total=1862,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:18.810 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:18.810 filename2: (groupid=0, jobs=1): err= 0: pid=90715: Wed Apr 17 16:32:52 2024 00:21:18.810 read: IOPS=189, BW=758KiB/s (776kB/s)(7612KiB/10042msec) 00:21:18.810 slat (usec): min=5, max=8026, avg=22.13, stdev=275.44 00:21:18.810 clat (msec): min=37, max=155, avg=84.22, stdev=23.69 00:21:18.810 lat (msec): min=37, max=155, avg=84.24, stdev=23.70 00:21:18.810 clat percentiles (msec): 00:21:18.810 | 1.00th=[ 42], 5.00th=[ 48], 10.00th=[ 58], 20.00th=[ 63], 00:21:18.810 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 83], 60.00th=[ 86], 00:21:18.810 | 70.00th=[ 96], 80.00th=[ 106], 90.00th=[ 120], 95.00th=[ 131], 00:21:18.810 | 99.00th=[ 144], 99.50th=[ 157], 99.90th=[ 157], 99.95th=[ 157], 00:21:18.810 | 99.99th=[ 157] 00:21:18.810 bw ( KiB/s): min= 512, max= 976, per=3.89%, avg=754.80, stdev=114.09, samples=20 00:21:18.810 iops : min= 128, max= 244, avg=188.70, stdev=28.52, samples=20 00:21:18.810 lat (msec) : 50=6.04%, 100=69.21%, 250=24.75% 00:21:18.810 cpu : usr=35.69%, sys=1.02%, ctx=1003, majf=0, minf=9 00:21:18.810 IO depths : 1=2.0%, 2=4.3%, 4=12.6%, 8=69.7%, 16=11.4%, 32=0.0%, >=64=0.0% 00:21:18.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.810 complete : 0=0.0%, 4=90.6%, 8=4.6%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.810 issued rwts: total=1903,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:18.810 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:18.810 00:21:18.810 Run status group 0 (all jobs): 00:21:18.810 READ: bw=18.9MiB/s 
(19.9MB/s), 687KiB/s-955KiB/s (703kB/s-978kB/s), io=191MiB (200MB), run=10001-10080msec 00:21:19.069 16:32:52 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:21:19.069 16:32:52 -- target/dif.sh@43 -- # local sub 00:21:19.069 16:32:52 -- target/dif.sh@45 -- # for sub in "$@" 00:21:19.069 16:32:52 -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:19.069 16:32:52 -- target/dif.sh@36 -- # local sub_id=0 00:21:19.069 16:32:52 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:19.069 16:32:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.069 16:32:52 -- common/autotest_common.sh@10 -- # set +x 00:21:19.069 16:32:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.069 16:32:53 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:19.069 16:32:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.069 16:32:53 -- common/autotest_common.sh@10 -- # set +x 00:21:19.069 16:32:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.069 16:32:53 -- target/dif.sh@45 -- # for sub in "$@" 00:21:19.069 16:32:53 -- target/dif.sh@46 -- # destroy_subsystem 1 00:21:19.069 16:32:53 -- target/dif.sh@36 -- # local sub_id=1 00:21:19.069 16:32:53 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:19.069 16:32:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.069 16:32:53 -- common/autotest_common.sh@10 -- # set +x 00:21:19.069 16:32:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.069 16:32:53 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:21:19.069 16:32:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.069 16:32:53 -- common/autotest_common.sh@10 -- # set +x 00:21:19.069 16:32:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.069 16:32:53 -- target/dif.sh@45 -- # for sub in "$@" 00:21:19.069 16:32:53 -- target/dif.sh@46 -- # destroy_subsystem 2 00:21:19.069 16:32:53 -- target/dif.sh@36 -- # local sub_id=2 00:21:19.069 16:32:53 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:19.069 16:32:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.069 16:32:53 -- common/autotest_common.sh@10 -- # set +x 00:21:19.069 16:32:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.069 16:32:53 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:21:19.069 16:32:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.069 16:32:53 -- common/autotest_common.sh@10 -- # set +x 00:21:19.069 16:32:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.069 16:32:53 -- target/dif.sh@115 -- # NULL_DIF=1 00:21:19.069 16:32:53 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:21:19.069 16:32:53 -- target/dif.sh@115 -- # numjobs=2 00:21:19.069 16:32:53 -- target/dif.sh@115 -- # iodepth=8 00:21:19.069 16:32:53 -- target/dif.sh@115 -- # runtime=5 00:21:19.069 16:32:53 -- target/dif.sh@115 -- # files=1 00:21:19.069 16:32:53 -- target/dif.sh@117 -- # create_subsystems 0 1 00:21:19.069 16:32:53 -- target/dif.sh@28 -- # local sub 00:21:19.069 16:32:53 -- target/dif.sh@30 -- # for sub in "$@" 00:21:19.069 16:32:53 -- target/dif.sh@31 -- # create_subsystem 0 00:21:19.069 16:32:53 -- target/dif.sh@18 -- # local sub_id=0 00:21:19.069 16:32:53 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:21:19.069 16:32:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.069 16:32:53 -- common/autotest_common.sh@10 
-- # set +x 00:21:19.069 bdev_null0 00:21:19.069 16:32:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.069 16:32:53 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:19.069 16:32:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.069 16:32:53 -- common/autotest_common.sh@10 -- # set +x 00:21:19.069 16:32:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.069 16:32:53 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:19.069 16:32:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.069 16:32:53 -- common/autotest_common.sh@10 -- # set +x 00:21:19.069 16:32:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.069 16:32:53 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:19.069 16:32:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.069 16:32:53 -- common/autotest_common.sh@10 -- # set +x 00:21:19.069 [2024-04-17 16:32:53.074512] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:19.069 16:32:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.069 16:32:53 -- target/dif.sh@30 -- # for sub in "$@" 00:21:19.069 16:32:53 -- target/dif.sh@31 -- # create_subsystem 1 00:21:19.069 16:32:53 -- target/dif.sh@18 -- # local sub_id=1 00:21:19.069 16:32:53 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:21:19.069 16:32:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.069 16:32:53 -- common/autotest_common.sh@10 -- # set +x 00:21:19.069 bdev_null1 00:21:19.069 16:32:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.069 16:32:53 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:21:19.069 16:32:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.069 16:32:53 -- common/autotest_common.sh@10 -- # set +x 00:21:19.069 16:32:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.069 16:32:53 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:21:19.069 16:32:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.069 16:32:53 -- common/autotest_common.sh@10 -- # set +x 00:21:19.069 16:32:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.069 16:32:53 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:19.069 16:32:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.069 16:32:53 -- common/autotest_common.sh@10 -- # set +x 00:21:19.331 16:32:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.331 16:32:53 -- target/dif.sh@118 -- # fio /dev/fd/62 00:21:19.331 16:32:53 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:21:19.331 16:32:53 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:21:19.331 16:32:53 -- nvmf/common.sh@521 -- # config=() 00:21:19.331 16:32:53 -- nvmf/common.sh@521 -- # local subsystem config 00:21:19.331 16:32:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:19.331 16:32:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:19.331 { 00:21:19.331 "params": { 00:21:19.331 "name": "Nvme$subsystem", 00:21:19.331 "trtype": "$TEST_TRANSPORT", 00:21:19.331 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:19.331 "adrfam": 
"ipv4", 00:21:19.331 "trsvcid": "$NVMF_PORT", 00:21:19.331 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:19.331 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:19.331 "hdgst": ${hdgst:-false}, 00:21:19.331 "ddgst": ${ddgst:-false} 00:21:19.331 }, 00:21:19.331 "method": "bdev_nvme_attach_controller" 00:21:19.331 } 00:21:19.331 EOF 00:21:19.331 )") 00:21:19.331 16:32:53 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:19.331 16:32:53 -- target/dif.sh@82 -- # gen_fio_conf 00:21:19.331 16:32:53 -- target/dif.sh@54 -- # local file 00:21:19.331 16:32:53 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:19.331 16:32:53 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:21:19.331 16:32:53 -- nvmf/common.sh@543 -- # cat 00:21:19.331 16:32:53 -- target/dif.sh@56 -- # cat 00:21:19.331 16:32:53 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:19.331 16:32:53 -- common/autotest_common.sh@1325 -- # local sanitizers 00:21:19.331 16:32:53 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:19.331 16:32:53 -- common/autotest_common.sh@1327 -- # shift 00:21:19.331 16:32:53 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:21:19.331 16:32:53 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:21:19.331 16:32:53 -- target/dif.sh@72 -- # (( file = 1 )) 00:21:19.331 16:32:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:19.331 16:32:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:19.331 { 00:21:19.331 "params": { 00:21:19.331 "name": "Nvme$subsystem", 00:21:19.331 "trtype": "$TEST_TRANSPORT", 00:21:19.331 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:19.331 "adrfam": "ipv4", 00:21:19.331 "trsvcid": "$NVMF_PORT", 00:21:19.331 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:19.331 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:19.331 "hdgst": ${hdgst:-false}, 00:21:19.331 "ddgst": ${ddgst:-false} 00:21:19.331 }, 00:21:19.331 "method": "bdev_nvme_attach_controller" 00:21:19.331 } 00:21:19.331 EOF 00:21:19.331 )") 00:21:19.331 16:32:53 -- target/dif.sh@72 -- # (( file <= files )) 00:21:19.331 16:32:53 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:19.331 16:32:53 -- target/dif.sh@73 -- # cat 00:21:19.331 16:32:53 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:21:19.331 16:32:53 -- nvmf/common.sh@543 -- # cat 00:21:19.331 16:32:53 -- common/autotest_common.sh@1331 -- # grep libasan 00:21:19.331 16:32:53 -- target/dif.sh@72 -- # (( file++ )) 00:21:19.331 16:32:53 -- target/dif.sh@72 -- # (( file <= files )) 00:21:19.331 16:32:53 -- nvmf/common.sh@545 -- # jq . 
00:21:19.331 16:32:53 -- nvmf/common.sh@546 -- # IFS=, 00:21:19.331 16:32:53 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:21:19.331 "params": { 00:21:19.331 "name": "Nvme0", 00:21:19.331 "trtype": "tcp", 00:21:19.331 "traddr": "10.0.0.2", 00:21:19.331 "adrfam": "ipv4", 00:21:19.331 "trsvcid": "4420", 00:21:19.331 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:19.331 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:19.331 "hdgst": false, 00:21:19.331 "ddgst": false 00:21:19.331 }, 00:21:19.331 "method": "bdev_nvme_attach_controller" 00:21:19.331 },{ 00:21:19.331 "params": { 00:21:19.331 "name": "Nvme1", 00:21:19.331 "trtype": "tcp", 00:21:19.331 "traddr": "10.0.0.2", 00:21:19.331 "adrfam": "ipv4", 00:21:19.331 "trsvcid": "4420", 00:21:19.331 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:19.331 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:19.331 "hdgst": false, 00:21:19.331 "ddgst": false 00:21:19.331 }, 00:21:19.331 "method": "bdev_nvme_attach_controller" 00:21:19.331 }' 00:21:19.331 16:32:53 -- common/autotest_common.sh@1331 -- # asan_lib= 00:21:19.331 16:32:53 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:21:19.331 16:32:53 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:21:19.331 16:32:53 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:21:19.331 16:32:53 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:19.331 16:32:53 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:21:19.331 16:32:53 -- common/autotest_common.sh@1331 -- # asan_lib= 00:21:19.331 16:32:53 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:21:19.331 16:32:53 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:19.331 16:32:53 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:19.331 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:21:19.331 ... 00:21:19.331 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:21:19.331 ... 00:21:19.332 fio-3.35 00:21:19.332 Starting 4 threads 00:21:19.898 [2024-04-17 16:32:53.856441] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
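(Two notes on the run that starts here. The rpc.c *ERROR* lines are the fio application attempting to bring up its own RPC server on the default /var/tmp/spdk.sock, which the already-running nvmf target owns; as the results below show, fio still starts its 4 threads and the jobs run to completion. Stripped of the /dev/fd plumbing, the traced invocation amounts to the sketch below: the plugin and fio paths are the ones shown in the trace, the job options mirror the NULL_DIF=1 parameters set above (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5), and "Nvme0n1" as the bdev name for the first attached namespace is an assumption based on SPDK's usual controller-name-plus-"n1" convention. /tmp/nvmf.json is a hypothetical path standing in for the /dev/fd/62 config.)

#!/usr/bin/env bash
# Minimal sketch: drive fio through the SPDK bdev ioengine against the
# NVMe-oF/TCP subsystems configured above. /tmp/nvmf.json is assumed to
# hold JSON like that produced by the sketch earlier in this log.
PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev

# LD_PRELOAD of the plugin is exactly what the traced run does; the
# spdk_bdev engine requires fio's thread mode, hence --thread=1.
LD_PRELOAD="$PLUGIN" /usr/src/fio/fio \
  --name=filename0 --filename=Nvme0n1 \
  --ioengine=spdk_bdev --spdk_json_conf=/tmp/nvmf.json \
  --thread=1 --rw=randread --bs=8k,16k,128k \
  --iodepth=8 --numjobs=2 --runtime=5 --time_based=1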
00:21:19.898 [2024-04-17 16:32:53.856529] rpc.c: 167:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:21:25.162 00:21:25.162 filename0: (groupid=0, jobs=1): err= 0: pid=90847: Wed Apr 17 16:32:58 2024 00:21:25.162 read: IOPS=1910, BW=14.9MiB/s (15.6MB/s)(74.6MiB/5002msec) 00:21:25.162 slat (nsec): min=7460, max=41609, avg=13109.54, stdev=4379.24 00:21:25.162 clat (usec): min=1456, max=8057, avg=4147.04, stdev=180.97 00:21:25.162 lat (usec): min=1464, max=8083, avg=4160.15, stdev=180.94 00:21:25.162 clat percentiles (usec): 00:21:25.162 | 1.00th=[ 3556], 5.00th=[ 4080], 10.00th=[ 4080], 20.00th=[ 4113], 00:21:25.162 | 30.00th=[ 4113], 40.00th=[ 4146], 50.00th=[ 4146], 60.00th=[ 4146], 00:21:25.162 | 70.00th=[ 4178], 80.00th=[ 4178], 90.00th=[ 4228], 95.00th=[ 4228], 00:21:25.162 | 99.00th=[ 4817], 99.50th=[ 4817], 99.90th=[ 4948], 99.95th=[ 7242], 00:21:25.162 | 99.99th=[ 8029] 00:21:25.162 bw ( KiB/s): min=15182, max=15360, per=24.97%, avg=15269.11, stdev=54.65, samples=9 00:21:25.162 iops : min= 1897, max= 1920, avg=1908.56, stdev= 6.98, samples=9 00:21:25.162 lat (msec) : 2=0.03%, 4=1.90%, 10=98.06% 00:21:25.162 cpu : usr=94.64%, sys=4.24%, ctx=12, majf=0, minf=9 00:21:25.162 IO depths : 1=0.1%, 2=0.1%, 4=74.9%, 8=25.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:25.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:25.162 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:25.162 issued rwts: total=9555,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:25.162 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:25.162 filename0: (groupid=0, jobs=1): err= 0: pid=90848: Wed Apr 17 16:32:58 2024 00:21:25.162 read: IOPS=1912, BW=14.9MiB/s (15.7MB/s)(74.8MiB/5002msec) 00:21:25.162 slat (usec): min=4, max=108, avg= 9.02, stdev= 3.17 00:21:25.162 clat (usec): min=1309, max=5571, avg=4135.17, stdev=162.07 00:21:25.162 lat (usec): min=1318, max=5584, avg=4144.19, stdev=161.93 00:21:25.162 clat percentiles (usec): 00:21:25.162 | 1.00th=[ 3687], 5.00th=[ 4080], 10.00th=[ 4080], 20.00th=[ 4113], 00:21:25.162 | 30.00th=[ 4113], 40.00th=[ 4113], 50.00th=[ 4146], 60.00th=[ 4146], 00:21:25.162 | 70.00th=[ 4146], 80.00th=[ 4178], 90.00th=[ 4228], 95.00th=[ 4228], 00:21:25.162 | 99.00th=[ 4621], 99.50th=[ 4621], 99.90th=[ 5473], 99.95th=[ 5538], 00:21:25.162 | 99.99th=[ 5604] 00:21:25.162 bw ( KiB/s): min=15201, max=15488, per=25.02%, avg=15299.67, stdev=96.46, samples=9 00:21:25.162 iops : min= 1900, max= 1936, avg=1912.44, stdev=12.07, samples=9 00:21:25.162 lat (msec) : 2=0.17%, 4=2.07%, 10=97.76% 00:21:25.162 cpu : usr=93.74%, sys=4.94%, ctx=16, majf=0, minf=0 00:21:25.162 IO depths : 1=10.8%, 2=25.0%, 4=50.0%, 8=14.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:25.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:25.162 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:25.162 issued rwts: total=9568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:25.162 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:25.162 filename1: (groupid=0, jobs=1): err= 0: pid=90849: Wed Apr 17 16:32:58 2024 00:21:25.162 read: IOPS=1911, BW=14.9MiB/s (15.7MB/s)(74.7MiB/5002msec) 00:21:25.162 slat (usec): min=4, max=129, avg=13.93, stdev= 4.76 00:21:25.162 clat (usec): min=2168, max=5687, avg=4112.41, stdev=101.00 00:21:25.162 lat (usec): min=2175, max=5695, avg=4126.34, stdev=101.76 00:21:25.162 clat percentiles (usec): 00:21:25.162 | 1.00th=[ 4015], 5.00th=[ 4047], 10.00th=[ 
4047], 20.00th=[ 4080], 00:21:25.162 | 30.00th=[ 4080], 40.00th=[ 4113], 50.00th=[ 4113], 60.00th=[ 4113], 00:21:25.162 | 70.00th=[ 4146], 80.00th=[ 4146], 90.00th=[ 4178], 95.00th=[ 4228], 00:21:25.162 | 99.00th=[ 4293], 99.50th=[ 4359], 99.90th=[ 5080], 99.95th=[ 5145], 00:21:25.162 | 99.99th=[ 5669] 00:21:25.162 bw ( KiB/s): min=15232, max=15360, per=24.98%, avg=15274.67, stdev=64.00, samples=9 00:21:25.162 iops : min= 1904, max= 1920, avg=1909.33, stdev= 8.00, samples=9 00:21:25.162 lat (msec) : 4=1.13%, 10=98.87% 00:21:25.162 cpu : usr=94.26%, sys=4.50%, ctx=6, majf=0, minf=0 00:21:25.162 IO depths : 1=12.3%, 2=25.0%, 4=50.0%, 8=12.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:25.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:25.162 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:25.162 issued rwts: total=9560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:25.162 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:25.162 filename1: (groupid=0, jobs=1): err= 0: pid=90850: Wed Apr 17 16:32:58 2024 00:21:25.162 read: IOPS=1910, BW=14.9MiB/s (15.6MB/s)(74.6MiB/5001msec) 00:21:25.162 slat (usec): min=5, max=2484, avg=12.44, stdev=25.73 00:21:25.162 clat (usec): min=2545, max=6782, avg=4131.71, stdev=126.62 00:21:25.162 lat (usec): min=2557, max=6790, avg=4144.15, stdev=125.70 00:21:25.162 clat percentiles (usec): 00:21:25.162 | 1.00th=[ 4015], 5.00th=[ 4047], 10.00th=[ 4080], 20.00th=[ 4080], 00:21:25.162 | 30.00th=[ 4113], 40.00th=[ 4113], 50.00th=[ 4146], 60.00th=[ 4146], 00:21:25.162 | 70.00th=[ 4146], 80.00th=[ 4178], 90.00th=[ 4228], 95.00th=[ 4228], 00:21:25.162 | 99.00th=[ 4359], 99.50th=[ 4621], 99.90th=[ 5276], 99.95th=[ 6718], 00:21:25.162 | 99.99th=[ 6783] 00:21:25.162 bw ( KiB/s): min=15232, max=15360, per=24.98%, avg=15274.67, stdev=64.00, samples=9 00:21:25.162 iops : min= 1904, max= 1920, avg=1909.33, stdev= 8.00, samples=9 00:21:25.162 lat (msec) : 4=1.04%, 10=98.96% 00:21:25.162 cpu : usr=93.54%, sys=5.16%, ctx=41, majf=0, minf=0 00:21:25.162 IO depths : 1=12.2%, 2=25.0%, 4=50.0%, 8=12.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:25.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:25.162 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:25.162 issued rwts: total=9552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:25.162 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:25.162 00:21:25.162 Run status group 0 (all jobs): 00:21:25.162 READ: bw=59.7MiB/s (62.6MB/s), 14.9MiB/s-14.9MiB/s (15.6MB/s-15.7MB/s), io=299MiB (313MB), run=5001-5002msec 00:21:25.421 16:32:59 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:21:25.421 16:32:59 -- target/dif.sh@43 -- # local sub 00:21:25.421 16:32:59 -- target/dif.sh@45 -- # for sub in "$@" 00:21:25.421 16:32:59 -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:25.421 16:32:59 -- target/dif.sh@36 -- # local sub_id=0 00:21:25.421 16:32:59 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:25.421 16:32:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:25.421 16:32:59 -- common/autotest_common.sh@10 -- # set +x 00:21:25.421 16:32:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:25.421 16:32:59 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:25.421 16:32:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:25.421 16:32:59 -- common/autotest_common.sh@10 -- # set +x 00:21:25.421 16:32:59 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:21:25.421 16:32:59 -- target/dif.sh@45 -- # for sub in "$@" 00:21:25.421 16:32:59 -- target/dif.sh@46 -- # destroy_subsystem 1 00:21:25.421 16:32:59 -- target/dif.sh@36 -- # local sub_id=1 00:21:25.421 16:32:59 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:25.421 16:32:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:25.421 16:32:59 -- common/autotest_common.sh@10 -- # set +x 00:21:25.421 16:32:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:25.421 16:32:59 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:21:25.421 16:32:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:25.421 16:32:59 -- common/autotest_common.sh@10 -- # set +x 00:21:25.421 ************************************ 00:21:25.421 END TEST fio_dif_rand_params 00:21:25.421 ************************************ 00:21:25.421 16:32:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:25.421 00:21:25.421 real 0m23.873s 00:21:25.421 user 2m6.146s 00:21:25.421 sys 0m5.332s 00:21:25.421 16:32:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:25.421 16:32:59 -- common/autotest_common.sh@10 -- # set +x 00:21:25.421 16:32:59 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:21:25.421 16:32:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:25.421 16:32:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:25.421 16:32:59 -- common/autotest_common.sh@10 -- # set +x 00:21:25.421 ************************************ 00:21:25.421 START TEST fio_dif_digest 00:21:25.421 ************************************ 00:21:25.421 16:32:59 -- common/autotest_common.sh@1111 -- # fio_dif_digest 00:21:25.421 16:32:59 -- target/dif.sh@123 -- # local NULL_DIF 00:21:25.421 16:32:59 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:21:25.421 16:32:59 -- target/dif.sh@125 -- # local hdgst ddgst 00:21:25.421 16:32:59 -- target/dif.sh@127 -- # NULL_DIF=3 00:21:25.421 16:32:59 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:21:25.421 16:32:59 -- target/dif.sh@127 -- # numjobs=3 00:21:25.421 16:32:59 -- target/dif.sh@127 -- # iodepth=3 00:21:25.421 16:32:59 -- target/dif.sh@127 -- # runtime=10 00:21:25.421 16:32:59 -- target/dif.sh@128 -- # hdgst=true 00:21:25.421 16:32:59 -- target/dif.sh@128 -- # ddgst=true 00:21:25.421 16:32:59 -- target/dif.sh@130 -- # create_subsystems 0 00:21:25.421 16:32:59 -- target/dif.sh@28 -- # local sub 00:21:25.421 16:32:59 -- target/dif.sh@30 -- # for sub in "$@" 00:21:25.421 16:32:59 -- target/dif.sh@31 -- # create_subsystem 0 00:21:25.421 16:32:59 -- target/dif.sh@18 -- # local sub_id=0 00:21:25.421 16:32:59 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:21:25.421 16:32:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:25.421 16:32:59 -- common/autotest_common.sh@10 -- # set +x 00:21:25.421 bdev_null0 00:21:25.421 16:32:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:25.421 16:32:59 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:25.421 16:32:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:25.421 16:32:59 -- common/autotest_common.sh@10 -- # set +x 00:21:25.421 16:32:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:25.421 16:32:59 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:25.421 16:32:59 -- common/autotest_common.sh@549 
-- # xtrace_disable 00:21:25.421 16:32:59 -- common/autotest_common.sh@10 -- # set +x 00:21:25.421 16:32:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:25.421 16:32:59 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:25.421 16:32:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:25.421 16:32:59 -- common/autotest_common.sh@10 -- # set +x 00:21:25.421 [2024-04-17 16:32:59.453958] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:25.421 16:32:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:25.421 16:32:59 -- target/dif.sh@131 -- # fio /dev/fd/62 00:21:25.421 16:32:59 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:21:25.421 16:32:59 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:21:25.421 16:32:59 -- nvmf/common.sh@521 -- # config=() 00:21:25.421 16:32:59 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:25.421 16:32:59 -- nvmf/common.sh@521 -- # local subsystem config 00:21:25.421 16:32:59 -- target/dif.sh@82 -- # gen_fio_conf 00:21:25.421 16:32:59 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:25.421 16:32:59 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:25.421 16:32:59 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:25.421 { 00:21:25.421 "params": { 00:21:25.421 "name": "Nvme$subsystem", 00:21:25.421 "trtype": "$TEST_TRANSPORT", 00:21:25.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:25.422 "adrfam": "ipv4", 00:21:25.422 "trsvcid": "$NVMF_PORT", 00:21:25.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:25.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:25.422 "hdgst": ${hdgst:-false}, 00:21:25.422 "ddgst": ${ddgst:-false} 00:21:25.422 }, 00:21:25.422 "method": "bdev_nvme_attach_controller" 00:21:25.422 } 00:21:25.422 EOF 00:21:25.422 )") 00:21:25.422 16:32:59 -- target/dif.sh@54 -- # local file 00:21:25.422 16:32:59 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:21:25.422 16:32:59 -- target/dif.sh@56 -- # cat 00:21:25.422 16:32:59 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:25.422 16:32:59 -- common/autotest_common.sh@1325 -- # local sanitizers 00:21:25.422 16:32:59 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:25.422 16:32:59 -- common/autotest_common.sh@1327 -- # shift 00:21:25.422 16:32:59 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:21:25.422 16:32:59 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:21:25.422 16:32:59 -- nvmf/common.sh@543 -- # cat 00:21:25.422 16:32:59 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:25.422 16:32:59 -- common/autotest_common.sh@1331 -- # grep libasan 00:21:25.422 16:32:59 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:21:25.679 16:32:59 -- target/dif.sh@72 -- # (( file = 1 )) 00:21:25.679 16:32:59 -- target/dif.sh@72 -- # (( file <= files )) 00:21:25.679 16:32:59 -- nvmf/common.sh@545 -- # jq . 
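The heredoc expanded at nvmf/common.sh@543 above is easier to read whole: gen_nvmf_target_json emits one bdev_nvme_attach_controller entry per subsystem id, with hdgst/ddgst defaulting to false unless the caller sets them (this digest test sets both true). A condensed stand-in; the per-controller block is verbatim from the trace, while the outer subsystems/bdev wrapper is an assumption based on what fio's --spdk_json_conf consumes:

gen_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{ "params": {
    "name": "Nvme$subsystem", "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP", "adrfam": "ipv4", "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false}, "ddgst": ${ddgst:-false} },
  "method": "bdev_nvme_attach_controller" }
EOF
        )")
    done
    local IFS=,   # comma-join the entries, exactly what IFS=,/printf do at @546/@547
    jq . <<< "{\"subsystems\":[{\"subsystem\":\"bdev\",\"config\":[${config[*]}]}]}"
}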
00:21:25.679 16:32:59 -- nvmf/common.sh@546 -- # IFS=, 00:21:25.679 16:32:59 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:21:25.679 "params": { 00:21:25.679 "name": "Nvme0", 00:21:25.679 "trtype": "tcp", 00:21:25.679 "traddr": "10.0.0.2", 00:21:25.679 "adrfam": "ipv4", 00:21:25.679 "trsvcid": "4420", 00:21:25.679 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:25.679 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:25.679 "hdgst": true, 00:21:25.679 "ddgst": true 00:21:25.679 }, 00:21:25.679 "method": "bdev_nvme_attach_controller" 00:21:25.679 }' 00:21:25.679 16:32:59 -- common/autotest_common.sh@1331 -- # asan_lib= 00:21:25.679 16:32:59 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:21:25.679 16:32:59 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:21:25.679 16:32:59 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:21:25.679 16:32:59 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:25.679 16:32:59 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:21:25.679 16:32:59 -- common/autotest_common.sh@1331 -- # asan_lib= 00:21:25.679 16:32:59 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:21:25.679 16:32:59 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:25.679 16:32:59 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:25.679 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:21:25.679 ... 00:21:25.679 fio-3.35 00:21:25.679 Starting 3 threads 00:21:26.245 [2024-04-17 16:33:00.050808] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
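Both file descriptors on the fio command line above are process substitutions: /dev/fd/62 carries the generated JSON and /dev/fd/61 the fio job section, so neither config ever touches disk. A minimal by-hand equivalent of this digest run, assuming the gen_target_json sketch from the earlier note; bs/iodepth/numjobs/runtime are the values set at target/dif.sh@127-128, Nvme0n1 is the bdev name SPDK derives from controller Nvme0's first namespace, and the remaining job options are filled in minimally:

hdgst=true ddgst=true   # picked up by the ${hdgst:-false} defaults in the template
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
/usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf <(gen_target_json 0) \
    <(printf '[filename0]\nfilename=Nvme0n1\nrw=randread\nbs=128k\niodepth=3\nnumjobs=3\nruntime=10\n')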
00:21:26.245 [2024-04-17 16:33:00.050894] rpc.c: 167:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:21:36.215 00:21:36.215 filename0: (groupid=0, jobs=1): err= 0: pid=90964: Wed Apr 17 16:33:10 2024 00:21:36.215 read: IOPS=214, BW=26.8MiB/s (28.1MB/s)(268MiB/10005msec) 00:21:36.215 slat (nsec): min=7442, max=47082, avg=12837.44, stdev=3198.06 00:21:36.215 clat (usec): min=5078, max=17715, avg=13966.75, stdev=1253.82 00:21:36.215 lat (usec): min=5091, max=17726, avg=13979.59, stdev=1253.84 00:21:36.215 clat percentiles (usec): 00:21:36.215 | 1.00th=[ 8586], 5.00th=[12256], 10.00th=[12780], 20.00th=[13173], 00:21:36.215 | 30.00th=[13435], 40.00th=[13829], 50.00th=[14091], 60.00th=[14353], 00:21:36.215 | 70.00th=[14615], 80.00th=[14877], 90.00th=[15270], 95.00th=[15664], 00:21:36.215 | 99.00th=[16450], 99.50th=[16909], 99.90th=[17171], 99.95th=[17171], 00:21:36.215 | 99.99th=[17695] 00:21:36.215 bw ( KiB/s): min=25600, max=29184, per=34.05%, avg=27418.95, stdev=894.56, samples=19 00:21:36.215 iops : min= 200, max= 228, avg=214.21, stdev= 6.99, samples=19 00:21:36.215 lat (msec) : 10=1.72%, 20=98.28% 00:21:36.215 cpu : usr=92.55%, sys=5.91%, ctx=15, majf=0, minf=9 00:21:36.215 IO depths : 1=3.3%, 2=96.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:36.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.215 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.215 issued rwts: total=2146,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:36.215 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:36.215 filename0: (groupid=0, jobs=1): err= 0: pid=90965: Wed Apr 17 16:33:10 2024 00:21:36.215 read: IOPS=172, BW=21.5MiB/s (22.6MB/s)(215MiB/10005msec) 00:21:36.215 slat (nsec): min=7398, max=42760, avg=12888.61, stdev=3162.37 00:21:36.215 clat (usec): min=7623, max=22834, avg=17408.73, stdev=1201.29 00:21:36.215 lat (usec): min=7636, max=22859, avg=17421.62, stdev=1201.61 00:21:36.215 clat percentiles (usec): 00:21:36.215 | 1.00th=[10945], 5.00th=[16057], 10.00th=[16450], 20.00th=[16909], 00:21:36.215 | 30.00th=[17171], 40.00th=[17433], 50.00th=[17433], 60.00th=[17695], 00:21:36.215 | 70.00th=[17957], 80.00th=[18220], 90.00th=[18482], 95.00th=[18744], 00:21:36.215 | 99.00th=[19268], 99.50th=[19530], 99.90th=[22938], 99.95th=[22938], 00:21:36.215 | 99.99th=[22938] 00:21:36.215 bw ( KiB/s): min=21248, max=23040, per=27.39%, avg=22056.42, stdev=534.70, samples=19 00:21:36.215 iops : min= 166, max= 180, avg=172.32, stdev= 4.18, samples=19 00:21:36.215 lat (msec) : 10=0.06%, 20=99.65%, 50=0.29% 00:21:36.215 cpu : usr=92.91%, sys=5.75%, ctx=15, majf=0, minf=9 00:21:36.215 IO depths : 1=8.1%, 2=91.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:36.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.215 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.215 issued rwts: total=1722,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:36.215 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:36.215 filename0: (groupid=0, jobs=1): err= 0: pid=90966: Wed Apr 17 16:33:10 2024 00:21:36.215 read: IOPS=242, BW=30.3MiB/s (31.8MB/s)(303MiB/10006msec) 00:21:36.215 slat (nsec): min=4582, max=54836, avg=12867.82, stdev=2426.48 00:21:36.215 clat (usec): min=8917, max=54820, avg=12352.74, stdev=2591.78 00:21:36.215 lat (usec): min=8929, max=54834, avg=12365.61, stdev=2591.77 00:21:36.215 clat percentiles (usec): 00:21:36.215 | 1.00th=[10290], 
5.00th=[10945], 10.00th=[11207], 20.00th=[11600], 00:21:36.215 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12256], 60.00th=[12387], 00:21:36.215 | 70.00th=[12649], 80.00th=[12780], 90.00th=[13042], 95.00th=[13304], 00:21:36.215 | 99.00th=[13829], 99.50th=[14353], 99.90th=[53740], 99.95th=[53740], 00:21:36.215 | 99.99th=[54789] 00:21:36.215 bw ( KiB/s): min=28416, max=32256, per=38.55%, avg=31043.37, stdev=990.90, samples=19 00:21:36.215 iops : min= 222, max= 252, avg=242.53, stdev= 7.74, samples=19 00:21:36.215 lat (msec) : 10=0.33%, 20=99.30%, 100=0.37% 00:21:36.215 cpu : usr=92.31%, sys=6.11%, ctx=9, majf=0, minf=0 00:21:36.215 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:36.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.215 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.215 issued rwts: total=2427,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:36.215 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:36.215 00:21:36.215 Run status group 0 (all jobs): 00:21:36.215 READ: bw=78.6MiB/s (82.5MB/s), 21.5MiB/s-30.3MiB/s (22.6MB/s-31.8MB/s), io=787MiB (825MB), run=10005-10006msec 00:21:36.474 16:33:10 -- target/dif.sh@132 -- # destroy_subsystems 0 00:21:36.474 16:33:10 -- target/dif.sh@43 -- # local sub 00:21:36.474 16:33:10 -- target/dif.sh@45 -- # for sub in "$@" 00:21:36.474 16:33:10 -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:36.474 16:33:10 -- target/dif.sh@36 -- # local sub_id=0 00:21:36.474 16:33:10 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:36.474 16:33:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:36.474 16:33:10 -- common/autotest_common.sh@10 -- # set +x 00:21:36.474 16:33:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:36.474 16:33:10 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:36.474 16:33:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:36.474 16:33:10 -- common/autotest_common.sh@10 -- # set +x 00:21:36.474 ************************************ 00:21:36.474 END TEST fio_dif_digest 00:21:36.474 ************************************ 00:21:36.474 16:33:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:36.474 00:21:36.474 real 0m11.016s 00:21:36.474 user 0m28.438s 00:21:36.474 sys 0m2.049s 00:21:36.474 16:33:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:36.474 16:33:10 -- common/autotest_common.sh@10 -- # set +x 00:21:36.474 16:33:10 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:21:36.474 16:33:10 -- target/dif.sh@147 -- # nvmftestfini 00:21:36.474 16:33:10 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:36.474 16:33:10 -- nvmf/common.sh@117 -- # sync 00:21:36.474 16:33:10 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:36.474 16:33:10 -- nvmf/common.sh@120 -- # set +e 00:21:36.474 16:33:10 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:36.474 16:33:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:36.474 rmmod nvme_tcp 00:21:36.733 rmmod nvme_fabrics 00:21:36.733 rmmod nvme_keyring 00:21:36.733 16:33:10 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:36.733 16:33:10 -- nvmf/common.sh@124 -- # set -e 00:21:36.733 16:33:10 -- nvmf/common.sh@125 -- # return 0 00:21:36.733 16:33:10 -- nvmf/common.sh@478 -- # '[' -n 90177 ']' 00:21:36.733 16:33:10 -- nvmf/common.sh@479 -- # killprocess 90177 00:21:36.733 16:33:10 -- common/autotest_common.sh@936 -- # '[' -z 90177 ']' 00:21:36.733 16:33:10 -- 
common/autotest_common.sh@940 -- # kill -0 90177 00:21:36.733 16:33:10 -- common/autotest_common.sh@941 -- # uname 00:21:36.733 16:33:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:36.733 16:33:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90177 00:21:36.733 killing process with pid 90177 00:21:36.733 16:33:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:36.733 16:33:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:36.733 16:33:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90177' 00:21:36.733 16:33:10 -- common/autotest_common.sh@955 -- # kill 90177 00:21:36.733 16:33:10 -- common/autotest_common.sh@960 -- # wait 90177 00:21:36.992 16:33:10 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:21:36.992 16:33:10 -- nvmf/common.sh@482 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:37.250 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:37.250 Waiting for block devices as requested 00:21:37.250 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:37.508 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:37.508 16:33:11 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:37.508 16:33:11 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:37.508 16:33:11 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:37.508 16:33:11 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:37.508 16:33:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:37.508 16:33:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:37.508 16:33:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:37.508 16:33:11 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:37.508 00:21:37.508 real 1m0.475s 00:21:37.508 user 3m52.661s 00:21:37.508 sys 0m14.976s 00:21:37.508 16:33:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:37.508 16:33:11 -- common/autotest_common.sh@10 -- # set +x 00:21:37.508 ************************************ 00:21:37.508 END TEST nvmf_dif 00:21:37.508 ************************************ 00:21:37.508 16:33:11 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:21:37.508 16:33:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:37.508 16:33:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:37.508 16:33:11 -- common/autotest_common.sh@10 -- # set +x 00:21:37.508 ************************************ 00:21:37.508 START TEST nvmf_abort_qd_sizes 00:21:37.508 ************************************ 00:21:37.508 16:33:11 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:21:37.767 * Looking for test storage... 
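killprocess, traced just above, is the harness's guarded kill: confirm the pid is alive (kill -0), confirm it is the reactor and not a sudo wrapper, then kill and reap it; nvmftestfini pairs it with a retrying unload of the kernel initiator modules. Roughly the following sketch; the retry loop's delay is not visible in the trace and is assumed:

sync
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break   # -v prints the rmmod lines seen above;
                                       # nvme_fabrics/nvme_keyring fall out as dependencies
    sleep 1                            # assumed back-off between attempts
done
modprobe -v -r nvme-fabrics
if kill -0 "$nvmfpid" && [[ $(ps --no-headers -o comm= "$nvmfpid") != sudo ]]; then
    echo "killing process with pid $nvmfpid"
    kill "$nvmfpid" && wait "$nvmfpid"
fi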
00:21:37.767 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:37.767 16:33:11 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:37.767 16:33:11 -- nvmf/common.sh@7 -- # uname -s 00:21:37.767 16:33:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:37.767 16:33:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:37.767 16:33:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:37.767 16:33:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:37.767 16:33:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:37.767 16:33:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:37.767 16:33:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:37.767 16:33:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:37.767 16:33:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:37.767 16:33:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:37.767 16:33:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d 00:21:37.767 16:33:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=35bbb10f-fc38-42ac-b909-033700c5e05d 00:21:37.767 16:33:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:37.767 16:33:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:37.767 16:33:11 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:37.767 16:33:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:37.767 16:33:11 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:37.767 16:33:11 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:37.767 16:33:11 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:37.767 16:33:11 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:37.767 16:33:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.767 16:33:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.767 16:33:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.767 16:33:11 -- paths/export.sh@5 -- # export PATH 00:21:37.767 16:33:11 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.767 16:33:11 -- nvmf/common.sh@47 -- # : 0 00:21:37.767 16:33:11 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:37.767 16:33:11 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:37.767 16:33:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:37.767 16:33:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:37.767 16:33:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:37.767 16:33:11 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:37.767 16:33:11 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:37.767 16:33:11 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:37.767 16:33:11 -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:21:37.767 16:33:11 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:37.767 16:33:11 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:37.767 16:33:11 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:37.767 16:33:11 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:37.767 16:33:11 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:37.767 16:33:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:37.767 16:33:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:37.767 16:33:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:37.767 16:33:11 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:21:37.767 16:33:11 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:21:37.767 16:33:11 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:21:37.767 16:33:11 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:21:37.767 16:33:11 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:21:37.767 16:33:11 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:21:37.767 16:33:11 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:37.767 16:33:11 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:37.767 16:33:11 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:37.767 16:33:11 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:37.767 16:33:11 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:37.767 16:33:11 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:37.767 16:33:11 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:37.767 16:33:11 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:37.767 16:33:11 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:37.767 16:33:11 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:37.767 16:33:11 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:37.767 16:33:11 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:37.767 16:33:11 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:37.767 16:33:11 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:37.767 Cannot find device "nvmf_tgt_br" 00:21:37.767 16:33:11 -- nvmf/common.sh@155 -- # true 00:21:37.767 16:33:11 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:37.767 Cannot find device "nvmf_tgt_br2" 00:21:37.767 16:33:11 -- nvmf/common.sh@156 -- # true 
00:21:37.767 16:33:11 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:37.767 16:33:11 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:37.767 Cannot find device "nvmf_tgt_br" 00:21:37.767 16:33:11 -- nvmf/common.sh@158 -- # true 00:21:37.767 16:33:11 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:37.767 Cannot find device "nvmf_tgt_br2" 00:21:37.767 16:33:11 -- nvmf/common.sh@159 -- # true 00:21:37.767 16:33:11 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:37.767 16:33:11 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:37.767 16:33:11 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:37.767 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:37.767 16:33:11 -- nvmf/common.sh@162 -- # true 00:21:37.767 16:33:11 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:37.768 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:37.768 16:33:11 -- nvmf/common.sh@163 -- # true 00:21:37.768 16:33:11 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:37.768 16:33:11 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:37.768 16:33:11 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:37.768 16:33:11 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:37.768 16:33:11 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:38.026 16:33:11 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:38.026 16:33:11 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:38.026 16:33:11 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:38.026 16:33:11 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:38.026 16:33:11 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:38.026 16:33:11 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:38.026 16:33:11 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:38.026 16:33:11 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:38.026 16:33:11 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:38.026 16:33:11 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:38.026 16:33:11 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:38.026 16:33:11 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:38.026 16:33:11 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:38.026 16:33:11 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:38.026 16:33:11 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:38.026 16:33:11 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:38.026 16:33:11 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:38.026 16:33:11 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:38.026 16:33:11 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:38.026 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:38.026 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:21:38.026 00:21:38.026 --- 10.0.0.2 ping statistics --- 00:21:38.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:38.026 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:21:38.026 16:33:11 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:38.026 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:38.026 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:21:38.026 00:21:38.026 --- 10.0.0.3 ping statistics --- 00:21:38.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:38.026 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:21:38.026 16:33:11 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:38.026 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:38.026 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:21:38.026 00:21:38.026 --- 10.0.0.1 ping statistics --- 00:21:38.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:38.026 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:21:38.026 16:33:11 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:38.026 16:33:11 -- nvmf/common.sh@422 -- # return 0 00:21:38.026 16:33:11 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:21:38.026 16:33:11 -- nvmf/common.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:38.592 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:38.892 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:38.892 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:38.892 16:33:12 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:38.892 16:33:12 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:38.892 16:33:12 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:38.892 16:33:12 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:38.892 16:33:12 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:38.892 16:33:12 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:38.892 16:33:12 -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:21:38.892 16:33:12 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:38.892 16:33:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:38.892 16:33:12 -- common/autotest_common.sh@10 -- # set +x 00:21:38.892 16:33:12 -- nvmf/common.sh@470 -- # nvmfpid=91570 00:21:38.892 16:33:12 -- nvmf/common.sh@471 -- # waitforlisten 91570 00:21:38.892 16:33:12 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:21:38.892 16:33:12 -- common/autotest_common.sh@817 -- # '[' -z 91570 ']' 00:21:38.892 16:33:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:38.892 16:33:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:38.892 16:33:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:38.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:38.892 16:33:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:38.892 16:33:12 -- common/autotest_common.sh@10 -- # set +x 00:21:38.892 [2024-04-17 16:33:12.904569] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 
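All three pings succeed, which validates the topology nvmf_veth_init built in the records above: the target ends of two veth pairs sit inside the nvmf_tgt_ns_spdk namespace (10.0.0.2 and 10.0.0.3), the initiator end stays in the root namespace (10.0.0.1), and a bridge ties the peer ends together. Condensed from the trace:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first listener
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second listener
# (the per-link "ip link set ... up" calls traced above are elided here)
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT         # admit NVMe/TCP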
00:21:38.892 [2024-04-17 16:33:12.905176] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:39.171 [2024-04-17 16:33:13.039428] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:39.171 [2024-04-17 16:33:13.156907] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:39.171 [2024-04-17 16:33:13.157090] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:39.171 [2024-04-17 16:33:13.157591] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:39.171 [2024-04-17 16:33:13.157839] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:39.171 [2024-04-17 16:33:13.158128] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:39.171 [2024-04-17 16:33:13.158407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:39.171 [2024-04-17 16:33:13.158536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:39.171 [2024-04-17 16:33:13.158614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:39.171 [2024-04-17 16:33:13.158615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:40.107 16:33:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:40.107 16:33:13 -- common/autotest_common.sh@850 -- # return 0 00:21:40.107 16:33:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:40.107 16:33:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:40.107 16:33:13 -- common/autotest_common.sh@10 -- # set +x 00:21:40.107 16:33:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:40.107 16:33:13 -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:21:40.107 16:33:13 -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:21:40.107 16:33:13 -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:21:40.107 16:33:13 -- scripts/common.sh@309 -- # local bdf bdfs 00:21:40.107 16:33:13 -- scripts/common.sh@310 -- # local nvmes 00:21:40.107 16:33:13 -- scripts/common.sh@312 -- # [[ -n '' ]] 00:21:40.107 16:33:13 -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:21:40.107 16:33:13 -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:21:40.108 16:33:13 -- scripts/common.sh@295 -- # local bdf= 00:21:40.108 16:33:13 -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:21:40.108 16:33:13 -- scripts/common.sh@230 -- # local class 00:21:40.108 16:33:13 -- scripts/common.sh@231 -- # local subclass 00:21:40.108 16:33:13 -- scripts/common.sh@232 -- # local progif 00:21:40.108 16:33:13 -- scripts/common.sh@233 -- # printf %02x 1 00:21:40.108 16:33:13 -- scripts/common.sh@233 -- # class=01 00:21:40.108 16:33:13 -- scripts/common.sh@234 -- # printf %02x 8 00:21:40.108 16:33:13 -- scripts/common.sh@234 -- # subclass=08 00:21:40.108 16:33:13 -- scripts/common.sh@235 -- # printf %02x 2 00:21:40.108 16:33:13 -- scripts/common.sh@235 -- # progif=02 00:21:40.108 16:33:13 -- scripts/common.sh@237 -- # hash lspci 00:21:40.108 16:33:13 -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:21:40.108 16:33:13 -- scripts/common.sh@239 -- 
# lspci -mm -n -D 00:21:40.108 16:33:13 -- scripts/common.sh@240 -- # grep -i -- -p02 00:21:40.108 16:33:13 -- scripts/common.sh@242 -- # tr -d '"' 00:21:40.108 16:33:13 -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:21:40.108 16:33:13 -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:40.108 16:33:13 -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:21:40.108 16:33:13 -- scripts/common.sh@15 -- # local i 00:21:40.108 16:33:13 -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:21:40.108 16:33:13 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:21:40.108 16:33:13 -- scripts/common.sh@24 -- # return 0 00:21:40.108 16:33:13 -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:21:40.108 16:33:13 -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:40.108 16:33:13 -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:21:40.108 16:33:13 -- scripts/common.sh@15 -- # local i 00:21:40.108 16:33:13 -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:21:40.108 16:33:13 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:21:40.108 16:33:13 -- scripts/common.sh@24 -- # return 0 00:21:40.108 16:33:13 -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:21:40.108 16:33:13 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:21:40.108 16:33:13 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:21:40.108 16:33:13 -- scripts/common.sh@320 -- # uname -s 00:21:40.108 16:33:13 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:21:40.108 16:33:13 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:21:40.108 16:33:13 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:21:40.108 16:33:13 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:21:40.108 16:33:13 -- scripts/common.sh@320 -- # uname -s 00:21:40.108 16:33:13 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:21:40.108 16:33:13 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:21:40.108 16:33:13 -- scripts/common.sh@325 -- # (( 2 )) 00:21:40.108 16:33:13 -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:21:40.108 16:33:13 -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:21:40.108 16:33:13 -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:21:40.108 16:33:13 -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:21:40.108 16:33:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:40.108 16:33:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:40.108 16:33:13 -- common/autotest_common.sh@10 -- # set +x 00:21:40.108 ************************************ 00:21:40.108 START TEST spdk_target_abort 00:21:40.108 ************************************ 00:21:40.108 16:33:14 -- common/autotest_common.sh@1111 -- # spdk_target 00:21:40.108 16:33:14 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:21:40.108 16:33:14 -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:21:40.108 16:33:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:40.108 16:33:14 -- common/autotest_common.sh@10 -- # set +x 00:21:40.108 spdk_targetn1 00:21:40.108 16:33:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:40.108 16:33:14 -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:40.108 16:33:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:40.108 16:33:14 -- common/autotest_common.sh@10 -- # set +x 00:21:40.108 
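The pipeline traced at scripts/common.sh@239-242 is how the harness enumerates NVMe controllers with no vendor tooling: PCI class 01 (mass storage), subclass 08 (non-volatile memory controller), programming interface 02 (NVMe). The same steps as a single pipeline, with the output this run produced:

# "0108" selects class/subclass, -p02 the NVMe programming interface
lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
# -> 0000:00:10.0
#    0000:00:11.0

Each BDF is then screened by pci_can_use (an empty blocklist here) and a /sys/bus/pci/drivers/nvme existence check before being counted, which is how the trace arrives at "(( 2 > 0 ))" and picks 0000:00:10.0 as the spdk_target device.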
[2024-04-17 16:33:14.144125] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:40.108 16:33:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:40.367 16:33:14 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:21:40.367 16:33:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:40.367 16:33:14 -- common/autotest_common.sh@10 -- # set +x 00:21:40.367 16:33:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:40.367 16:33:14 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:21:40.367 16:33:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:40.367 16:33:14 -- common/autotest_common.sh@10 -- # set +x 00:21:40.367 16:33:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:40.367 16:33:14 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:21:40.367 16:33:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:40.367 16:33:14 -- common/autotest_common.sh@10 -- # set +x 00:21:40.367 [2024-04-17 16:33:14.172303] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:40.367 16:33:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:40.367 16:33:14 -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:21:40.367 16:33:14 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:40.367 16:33:14 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:40.367 16:33:14 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:21:40.367 16:33:14 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:40.367 16:33:14 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:21:40.367 16:33:14 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:40.367 16:33:14 -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:40.367 16:33:14 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:40.367 16:33:14 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:40.367 16:33:14 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:40.367 16:33:14 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:40.367 16:33:14 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:40.367 16:33:14 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:40.367 16:33:14 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:21:40.367 16:33:14 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:40.367 16:33:14 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:40.367 16:33:14 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:40.367 16:33:14 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:40.367 16:33:14 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:40.367 16:33:14 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:43.652 Initializing NVMe Controllers 00:21:43.652 Attached to NVMe over 
Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:21:43.652 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:43.652 Initialization complete. Launching workers. 00:21:43.652 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11524, failed: 0 00:21:43.652 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1093, failed to submit 10431 00:21:43.652 success 791, unsuccess 302, failed 0 00:21:43.652 16:33:17 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:43.652 16:33:17 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:46.935 [2024-04-17 16:33:20.631841] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140f7f0 is same with the state(5) to be set 00:21:46.935 [2024-04-17 16:33:20.632551] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140f7f0 is same with the state(5) to be set 00:21:46.935 [2024-04-17 16:33:20.632663] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140f7f0 is same with the state(5) to be set 00:21:46.935 [2024-04-17 16:33:20.632737] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140f7f0 is same with the state(5) to be set 00:21:46.935 [2024-04-17 16:33:20.632822] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140f7f0 is same with the state(5) to be set 00:21:46.935 [2024-04-17 16:33:20.632901] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140f7f0 is same with the state(5) to be set 00:21:46.935 [2024-04-17 16:33:20.632966] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140f7f0 is same with the state(5) to be set 00:21:46.935 [2024-04-17 16:33:20.633028] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140f7f0 is same with the state(5) to be set 00:21:46.935 Initializing NVMe Controllers 00:21:46.935 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:21:46.935 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:46.935 Initialization complete. Launching workers. 00:21:46.935 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5980, failed: 0 00:21:46.935 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1292, failed to submit 4688 00:21:46.935 success 276, unsuccess 1016, failed 0 00:21:46.935 16:33:20 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:46.935 16:33:20 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:50.227 Initializing NVMe Controllers 00:21:50.227 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:21:50.227 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:50.227 Initialization complete. Launching workers. 
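The two result blocks above (queue depths 4 and 24) and the run just launched all come from one sweep of SPDK's abort example over qds=(4 24 64), as set at abort_qd_sizes.sh@26. Condensed driver loop, flags exactly as in the traced command lines:

target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
for qd in 4 24 64; do
    /home/vagrant/spdk_repo/spdk/build/examples/abort \
        -q "$qd" -w rw -M 50 -o 4096 -r "$target"
done

Deeper queues keep more commands in flight and therefore give the tool more to abort: 1093 aborts submitted at depth 4 and 1292 at depth 24 above, rising again at depth 64 below.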
00:21:50.227 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 29624, failed: 0 00:21:50.227 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2590, failed to submit 27034 00:21:50.227 success 385, unsuccess 2205, failed 0 00:21:50.227 16:33:23 -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:21:50.227 16:33:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.227 16:33:23 -- common/autotest_common.sh@10 -- # set +x 00:21:50.227 16:33:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.227 16:33:24 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:21:50.227 16:33:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.227 16:33:24 -- common/autotest_common.sh@10 -- # set +x 00:21:51.163 16:33:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:51.163 16:33:25 -- target/abort_qd_sizes.sh@61 -- # killprocess 91570 00:21:51.163 16:33:25 -- common/autotest_common.sh@936 -- # '[' -z 91570 ']' 00:21:51.163 16:33:25 -- common/autotest_common.sh@940 -- # kill -0 91570 00:21:51.163 16:33:25 -- common/autotest_common.sh@941 -- # uname 00:21:51.163 16:33:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:51.163 16:33:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91570 00:21:51.163 16:33:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:51.163 killing process with pid 91570 00:21:51.163 16:33:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:51.163 16:33:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91570' 00:21:51.163 16:33:25 -- common/autotest_common.sh@955 -- # kill 91570 00:21:51.163 16:33:25 -- common/autotest_common.sh@960 -- # wait 91570 00:21:51.423 00:21:51.423 real 0m11.272s 00:21:51.423 user 0m46.245s 00:21:51.423 sys 0m1.623s 00:21:51.423 16:33:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:51.423 16:33:25 -- common/autotest_common.sh@10 -- # set +x 00:21:51.423 ************************************ 00:21:51.423 END TEST spdk_target_abort 00:21:51.423 ************************************ 00:21:51.423 16:33:25 -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:21:51.423 16:33:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:51.423 16:33:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:51.423 16:33:25 -- common/autotest_common.sh@10 -- # set +x 00:21:51.423 ************************************ 00:21:51.423 START TEST kernel_target_abort 00:21:51.423 ************************************ 00:21:51.423 16:33:25 -- common/autotest_common.sh@1111 -- # kernel_target 00:21:51.423 16:33:25 -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:21:51.423 16:33:25 -- nvmf/common.sh@717 -- # local ip 00:21:51.423 16:33:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:51.423 16:33:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:51.423 16:33:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:51.423 16:33:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:51.423 16:33:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:21:51.423 16:33:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:51.423 16:33:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:21:51.423 16:33:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:21:51.423 16:33:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 
00:21:51.423 16:33:25 -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:21:51.423 16:33:25 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:21:51.423 16:33:25 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:21:51.423 16:33:25 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:51.423 16:33:25 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:51.423 16:33:25 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:51.423 16:33:25 -- nvmf/common.sh@628 -- # local block nvme 00:21:51.423 16:33:25 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:21:51.423 16:33:25 -- nvmf/common.sh@631 -- # modprobe nvmet 00:21:51.682 16:33:25 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:51.682 16:33:25 -- nvmf/common.sh@636 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:51.940 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:51.940 Waiting for block devices as requested 00:21:51.940 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:52.199 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:52.199 16:33:26 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:21:52.199 16:33:26 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:52.199 16:33:26 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:21:52.199 16:33:26 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:21:52.199 16:33:26 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:52.199 16:33:26 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:52.199 16:33:26 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:21:52.199 16:33:26 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:21:52.199 16:33:26 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:21:52.199 No valid GPT data, bailing 00:21:52.199 16:33:26 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:52.199 16:33:26 -- scripts/common.sh@391 -- # pt= 00:21:52.199 16:33:26 -- scripts/common.sh@392 -- # return 1 00:21:52.199 16:33:26 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:21:52.199 16:33:26 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:21:52.199 16:33:26 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n2 ]] 00:21:52.199 16:33:26 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n2 00:21:52.199 16:33:26 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:21:52.199 16:33:26 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:21:52.199 16:33:26 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:52.199 16:33:26 -- nvmf/common.sh@642 -- # block_in_use nvme0n2 00:21:52.199 16:33:26 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:21:52.199 16:33:26 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:21:52.199 No valid GPT data, bailing 00:21:52.199 16:33:26 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:21:52.199 16:33:26 -- scripts/common.sh@391 -- # pt= 00:21:52.199 16:33:26 -- scripts/common.sh@392 -- # return 1 00:21:52.199 16:33:26 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n2 00:21:52.199 16:33:26 -- nvmf/common.sh@639 -- # for 
block in /sys/block/nvme* 00:21:52.199 16:33:26 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n3 ]] 00:21:52.199 16:33:26 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n3 00:21:52.200 16:33:26 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:21:52.200 16:33:26 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:21:52.200 16:33:26 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:52.200 16:33:26 -- nvmf/common.sh@642 -- # block_in_use nvme0n3 00:21:52.200 16:33:26 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:21:52.200 16:33:26 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:21:52.459 No valid GPT data, bailing 00:21:52.459 16:33:26 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:21:52.459 16:33:26 -- scripts/common.sh@391 -- # pt= 00:21:52.459 16:33:26 -- scripts/common.sh@392 -- # return 1 00:21:52.459 16:33:26 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n3 00:21:52.459 16:33:26 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:21:52.459 16:33:26 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:21:52.459 16:33:26 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:21:52.459 16:33:26 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:21:52.459 16:33:26 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:21:52.459 16:33:26 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:52.459 16:33:26 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:21:52.459 16:33:26 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:21:52.459 16:33:26 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:21:52.459 No valid GPT data, bailing 00:21:52.459 16:33:26 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:21:52.459 16:33:26 -- scripts/common.sh@391 -- # pt= 00:21:52.459 16:33:26 -- scripts/common.sh@392 -- # return 1 00:21:52.459 16:33:26 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:21:52.459 16:33:26 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme1n1 ]] 00:21:52.459 16:33:26 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:52.459 16:33:26 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:52.459 16:33:26 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:52.459 16:33:26 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:21:52.459 16:33:26 -- nvmf/common.sh@656 -- # echo 1 00:21:52.459 16:33:26 -- nvmf/common.sh@657 -- # echo /dev/nvme1n1 00:21:52.459 16:33:26 -- nvmf/common.sh@658 -- # echo 1 00:21:52.459 16:33:26 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:21:52.459 16:33:26 -- nvmf/common.sh@661 -- # echo tcp 00:21:52.459 16:33:26 -- nvmf/common.sh@662 -- # echo 4420 00:21:52.459 16:33:26 -- nvmf/common.sh@663 -- # echo ipv4 00:21:52.459 16:33:26 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:52.459 16:33:26 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d --hostid=35bbb10f-fc38-42ac-b909-033700c5e05d -a 10.0.0.1 -t tcp -s 4420 00:21:52.459 00:21:52.459 Discovery Log Number of Records 2, Generation counter 2 00:21:52.459 =====Discovery Log Entry 0====== 00:21:52.459 trtype: tcp 00:21:52.459 adrfam: ipv4 00:21:52.459 
subtype: current discovery subsystem 00:21:52.459 treq: not specified, sq flow control disable supported 00:21:52.459 portid: 1 00:21:52.459 trsvcid: 4420 00:21:52.459 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:52.459 traddr: 10.0.0.1 00:21:52.459 eflags: none 00:21:52.459 sectype: none 00:21:52.459 =====Discovery Log Entry 1====== 00:21:52.459 trtype: tcp 00:21:52.459 adrfam: ipv4 00:21:52.459 subtype: nvme subsystem 00:21:52.459 treq: not specified, sq flow control disable supported 00:21:52.459 portid: 1 00:21:52.459 trsvcid: 4420 00:21:52.459 subnqn: nqn.2016-06.io.spdk:testnqn 00:21:52.459 traddr: 10.0.0.1 00:21:52.459 eflags: none 00:21:52.459 sectype: none 00:21:52.459 16:33:26 -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:21:52.459 16:33:26 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:52.459 16:33:26 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:52.459 16:33:26 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:21:52.459 16:33:26 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:52.459 16:33:26 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:21:52.459 16:33:26 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:52.459 16:33:26 -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:52.459 16:33:26 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:52.459 16:33:26 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:52.459 16:33:26 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:52.459 16:33:26 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:52.459 16:33:26 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:52.459 16:33:26 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:52.459 16:33:26 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:21:52.459 16:33:26 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:52.459 16:33:26 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:21:52.459 16:33:26 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:52.459 16:33:26 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:52.459 16:33:26 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:52.459 16:33:26 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:55.763 Initializing NVMe Controllers 00:21:55.763 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:55.763 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:55.763 Initialization complete. Launching workers. 
00:21:55.763 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 32933, failed: 0 00:21:55.763 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32933, failed to submit 0 00:21:55.763 success 0, unsuccess 32933, failed 0 00:21:55.763 16:33:29 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:55.763 16:33:29 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:59.049 Initializing NVMe Controllers 00:21:59.049 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:59.049 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:59.049 Initialization complete. Launching workers. 00:21:59.049 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 68646, failed: 0 00:21:59.049 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29120, failed to submit 39526 00:21:59.049 success 0, unsuccess 29120, failed 0 00:21:59.049 16:33:32 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:59.049 16:33:32 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:02.335 Initializing NVMe Controllers 00:22:02.335 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:02.335 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:02.335 Initialization complete. Launching workers. 00:22:02.335 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 81110, failed: 0 00:22:02.335 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20250, failed to submit 60860 00:22:02.335 success 0, unsuccess 20250, failed 0 00:22:02.335 16:33:35 -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:22:02.335 16:33:35 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:22:02.335 16:33:35 -- nvmf/common.sh@675 -- # echo 0 00:22:02.335 16:33:35 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:02.335 16:33:35 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:02.335 16:33:35 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:02.335 16:33:35 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:02.335 16:33:35 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:22:02.335 16:33:35 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:22:02.335 16:33:35 -- nvmf/common.sh@687 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:02.901 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:04.799 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:22:04.799 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:22:04.799 ************************************ 00:22:04.799 END TEST kernel_target_abort 00:22:04.799 ************************************ 00:22:04.799 00:22:04.799 real 0m13.142s 00:22:04.799 user 0m5.942s 00:22:04.799 sys 0m4.471s 00:22:04.799 16:33:38 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:22:04.799 16:33:38 -- common/autotest_common.sh@10 -- # set +x 00:22:04.799 16:33:38 -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:04.799 16:33:38 -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:22:04.799 16:33:38 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:04.799 16:33:38 -- nvmf/common.sh@117 -- # sync 00:22:04.799 16:33:38 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:04.799 16:33:38 -- nvmf/common.sh@120 -- # set +e 00:22:04.799 16:33:38 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:04.799 16:33:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:04.799 rmmod nvme_tcp 00:22:04.799 rmmod nvme_fabrics 00:22:04.799 rmmod nvme_keyring 00:22:04.799 16:33:38 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:04.799 16:33:38 -- nvmf/common.sh@124 -- # set -e 00:22:04.799 16:33:38 -- nvmf/common.sh@125 -- # return 0 00:22:04.799 16:33:38 -- nvmf/common.sh@478 -- # '[' -n 91570 ']' 00:22:04.799 16:33:38 -- nvmf/common.sh@479 -- # killprocess 91570 00:22:04.799 16:33:38 -- common/autotest_common.sh@936 -- # '[' -z 91570 ']' 00:22:04.799 16:33:38 -- common/autotest_common.sh@940 -- # kill -0 91570 00:22:04.799 Process with pid 91570 is not found 00:22:04.799 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (91570) - No such process 00:22:04.799 16:33:38 -- common/autotest_common.sh@963 -- # echo 'Process with pid 91570 is not found' 00:22:04.799 16:33:38 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:22:04.799 16:33:38 -- nvmf/common.sh@482 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:05.058 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:05.058 Waiting for block devices as requested 00:22:05.316 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:05.316 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:05.316 16:33:39 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:05.316 16:33:39 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:05.316 16:33:39 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:05.316 16:33:39 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:05.316 16:33:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:05.316 16:33:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:05.316 16:33:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.316 16:33:39 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:05.316 00:22:05.316 real 0m27.813s 00:22:05.316 user 0m53.449s 00:22:05.316 sys 0m7.500s 00:22:05.316 16:33:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:05.316 16:33:39 -- common/autotest_common.sh@10 -- # set +x 00:22:05.316 ************************************ 00:22:05.316 END TEST nvmf_abort_qd_sizes 00:22:05.316 ************************************ 00:22:05.573 16:33:39 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:22:05.573 16:33:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:05.573 16:33:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:05.573 16:33:39 -- common/autotest_common.sh@10 -- # set +x 00:22:05.573 ************************************ 00:22:05.573 START TEST keyring_file 00:22:05.573 ************************************ 00:22:05.573 16:33:39 -- common/autotest_common.sh@1111 -- # 
/home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:22:05.573 * Looking for test storage... 00:22:05.573 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:22:05.573 16:33:39 -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:22:05.573 16:33:39 -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:05.573 16:33:39 -- nvmf/common.sh@7 -- # uname -s 00:22:05.573 16:33:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:05.573 16:33:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:05.573 16:33:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:05.573 16:33:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:05.573 16:33:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:05.573 16:33:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:05.573 16:33:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:05.573 16:33:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:05.573 16:33:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:05.573 16:33:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:05.573 16:33:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:35bbb10f-fc38-42ac-b909-033700c5e05d 00:22:05.573 16:33:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=35bbb10f-fc38-42ac-b909-033700c5e05d 00:22:05.573 16:33:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:05.573 16:33:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:05.573 16:33:39 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:05.573 16:33:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:05.573 16:33:39 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:05.573 16:33:39 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:05.573 16:33:39 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:05.573 16:33:39 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:05.573 16:33:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.573 16:33:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.573 16:33:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.573 16:33:39 -- paths/export.sh@5 -- # export PATH 00:22:05.573 16:33:39 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.573 16:33:39 -- nvmf/common.sh@47 -- # : 0 00:22:05.573 16:33:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:05.573 16:33:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:05.573 16:33:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:05.573 16:33:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:05.573 16:33:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:05.573 16:33:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:05.573 16:33:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:05.573 16:33:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:05.573 16:33:39 -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:22:05.573 16:33:39 -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:22:05.573 16:33:39 -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:22:05.573 16:33:39 -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:22:05.573 16:33:39 -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:22:05.573 16:33:39 -- keyring/file.sh@24 -- # trap cleanup EXIT 00:22:05.573 16:33:39 -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:22:05.573 16:33:39 -- keyring/common.sh@15 -- # local name key digest path 00:22:05.573 16:33:39 -- keyring/common.sh@17 -- # name=key0 00:22:05.573 16:33:39 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:22:05.573 16:33:39 -- keyring/common.sh@17 -- # digest=0 00:22:05.573 16:33:39 -- keyring/common.sh@18 -- # mktemp 00:22:05.573 16:33:39 -- keyring/common.sh@18 -- # path=/tmp/tmp.MtykEn6PNZ 00:22:05.573 16:33:39 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:22:05.573 16:33:39 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:22:05.573 16:33:39 -- nvmf/common.sh@691 -- # local prefix key digest 00:22:05.573 16:33:39 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:22:05.573 16:33:39 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:22:05.573 16:33:39 -- nvmf/common.sh@693 -- # digest=0 00:22:05.573 16:33:39 -- nvmf/common.sh@694 -- # python - 00:22:05.831 16:33:39 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.MtykEn6PNZ 00:22:05.831 16:33:39 -- keyring/common.sh@23 -- # echo /tmp/tmp.MtykEn6PNZ 00:22:05.831 16:33:39 -- keyring/file.sh@26 -- # key0path=/tmp/tmp.MtykEn6PNZ 00:22:05.831 16:33:39 -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:22:05.831 16:33:39 -- keyring/common.sh@15 -- # local name key digest path 00:22:05.831 16:33:39 -- keyring/common.sh@17 -- # name=key1 00:22:05.831 16:33:39 -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:22:05.831 16:33:39 -- keyring/common.sh@17 -- # digest=0 00:22:05.831 16:33:39 -- keyring/common.sh@18 -- # mktemp 00:22:05.831 16:33:39 -- keyring/common.sh@18 -- # path=/tmp/tmp.Ys3wl2ldW2 00:22:05.831 16:33:39 -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:22:05.831 16:33:39 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
112233445566778899aabbccddeeff00 0 00:22:05.831 16:33:39 -- nvmf/common.sh@691 -- # local prefix key digest 00:22:05.831 16:33:39 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:22:05.831 16:33:39 -- nvmf/common.sh@693 -- # key=112233445566778899aabbccddeeff00 00:22:05.831 16:33:39 -- nvmf/common.sh@693 -- # digest=0 00:22:05.831 16:33:39 -- nvmf/common.sh@694 -- # python - 00:22:05.831 16:33:39 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Ys3wl2ldW2 00:22:05.831 16:33:39 -- keyring/common.sh@23 -- # echo /tmp/tmp.Ys3wl2ldW2 00:22:05.831 16:33:39 -- keyring/file.sh@27 -- # key1path=/tmp/tmp.Ys3wl2ldW2 00:22:05.831 16:33:39 -- keyring/file.sh@30 -- # tgtpid=92481 00:22:05.831 16:33:39 -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:05.831 16:33:39 -- keyring/file.sh@32 -- # waitforlisten 92481 00:22:05.831 16:33:39 -- common/autotest_common.sh@817 -- # '[' -z 92481 ']' 00:22:05.831 16:33:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:05.831 16:33:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:05.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:05.831 16:33:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:05.831 16:33:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:05.831 16:33:39 -- common/autotest_common.sh@10 -- # set +x 00:22:05.831 [2024-04-17 16:33:39.778489] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:22:05.831 [2024-04-17 16:33:39.778757] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92481 ] 00:22:06.090 [2024-04-17 16:33:39.917818] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:06.090 [2024-04-17 16:33:40.047263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:07.026 16:33:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:07.026 16:33:40 -- common/autotest_common.sh@850 -- # return 0 00:22:07.026 16:33:40 -- keyring/file.sh@33 -- # rpc_cmd 00:22:07.026 16:33:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:07.026 16:33:40 -- common/autotest_common.sh@10 -- # set +x 00:22:07.026 [2024-04-17 16:33:40.817841] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:07.026 null0 00:22:07.026 [2024-04-17 16:33:40.849792] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:07.026 [2024-04-17 16:33:40.850175] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:22:07.026 [2024-04-17 16:33:40.857796] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:07.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
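Before moving on, note how the two key files above were minted. prep_key writes an NVMe TLS PSK in interchange format (prefix "NVMeTLSkey-1", a hash identifier, then base64 of the raw key with a CRC-32 appended) and locks the file down to 0600. A rough standalone sketch; the helper body and the CRC byte order are assumptions, and only the hex keys, digest 0, and the 0600 mode come from the trace:

prep_key() {
    local key=$1 digest=$2 path
    path=$(mktemp)                       # e.g. /tmp/tmp.MtykEn6PNZ above
    python3 - "$key" "$digest" > "$path" << 'PY'
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
digest = int(sys.argv[2])                     # 0 = no hash, 1 = SHA-256, 2 = SHA-384
crc = zlib.crc32(key).to_bytes(4, "little")   # assumed little-endian
print(f"NVMeTLSkey-1:{digest:02x}:{base64.b64encode(key + crc).decode()}:")
PY
    chmod 0600 "$path"                   # anything looser is rejected, as the 0660 test further down shows
    echo "$path"
}
key0path=$(prep_key 00112233445566778899aabbccddeeff 0)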
00:22:07.026 16:33:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:07.026 16:33:40 -- keyring/file.sh@43 -- # bperfpid=92513 00:22:07.026 16:33:40 -- keyring/file.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:22:07.026 16:33:40 -- keyring/file.sh@45 -- # waitforlisten 92513 /var/tmp/bperf.sock 00:22:07.026 16:33:40 -- common/autotest_common.sh@817 -- # '[' -z 92513 ']' 00:22:07.026 16:33:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:07.026 16:33:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:07.026 16:33:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:07.026 16:33:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:07.026 16:33:40 -- common/autotest_common.sh@10 -- # set +x 00:22:07.026 [2024-04-17 16:33:40.916185] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 00:22:07.026 [2024-04-17 16:33:40.916562] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92513 ] 00:22:07.026 [2024-04-17 16:33:41.058390] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:07.286 [2024-04-17 16:33:41.192040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:08.221 16:33:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:08.221 16:33:41 -- common/autotest_common.sh@850 -- # return 0 00:22:08.221 16:33:41 -- keyring/file.sh@46 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.MtykEn6PNZ 00:22:08.221 16:33:41 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.MtykEn6PNZ 00:22:08.221 16:33:42 -- keyring/file.sh@47 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Ys3wl2ldW2 00:22:08.221 16:33:42 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Ys3wl2ldW2 00:22:08.480 16:33:42 -- keyring/file.sh@48 -- # get_key key0 00:22:08.480 16:33:42 -- keyring/file.sh@48 -- # jq -r .path 00:22:08.480 16:33:42 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:08.480 16:33:42 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:08.480 16:33:42 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:09.046 16:33:42 -- keyring/file.sh@48 -- # [[ /tmp/tmp.MtykEn6PNZ == \/\t\m\p\/\t\m\p\.\M\t\y\k\E\n\6\P\N\Z ]] 00:22:09.046 16:33:42 -- keyring/file.sh@49 -- # get_key key1 00:22:09.046 16:33:42 -- keyring/file.sh@49 -- # jq -r .path 00:22:09.046 16:33:42 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:09.046 16:33:42 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:09.046 16:33:42 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:09.369 16:33:43 -- keyring/file.sh@49 -- # [[ /tmp/tmp.Ys3wl2ldW2 == \/\t\m\p\/\t\m\p\.\Y\s\3\w\l\2\l\d\W\2 ]] 00:22:09.369 16:33:43 -- keyring/file.sh@50 -- # get_refcnt key0 00:22:09.369 16:33:43 -- keyring/common.sh@12 -- # get_key key0 00:22:09.369 16:33:43 -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:09.369 16:33:43 -- keyring/common.sh@10 -- 
# bperf_cmd keyring_get_keys 00:22:09.369 16:33:43 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:09.369 16:33:43 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:09.628 16:33:43 -- keyring/file.sh@50 -- # (( 1 == 1 )) 00:22:09.628 16:33:43 -- keyring/file.sh@51 -- # get_refcnt key1 00:22:09.628 16:33:43 -- keyring/common.sh@12 -- # get_key key1 00:22:09.628 16:33:43 -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:09.628 16:33:43 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:09.628 16:33:43 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:09.628 16:33:43 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:09.886 16:33:43 -- keyring/file.sh@51 -- # (( 1 == 1 )) 00:22:09.886 16:33:43 -- keyring/file.sh@54 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:09.886 16:33:43 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:10.145 [2024-04-17 16:33:44.023649] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:10.145 nvme0n1 00:22:10.145 16:33:44 -- keyring/file.sh@56 -- # get_refcnt key0 00:22:10.145 16:33:44 -- keyring/common.sh@12 -- # get_key key0 00:22:10.145 16:33:44 -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:10.145 16:33:44 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:10.145 16:33:44 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:10.145 16:33:44 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:10.404 16:33:44 -- keyring/file.sh@56 -- # (( 2 == 2 )) 00:22:10.404 16:33:44 -- keyring/file.sh@57 -- # get_refcnt key1 00:22:10.404 16:33:44 -- keyring/common.sh@12 -- # get_key key1 00:22:10.404 16:33:44 -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:10.404 16:33:44 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:10.404 16:33:44 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:10.404 16:33:44 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:10.662 16:33:44 -- keyring/file.sh@57 -- # (( 1 == 1 )) 00:22:10.662 16:33:44 -- keyring/file.sh@59 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:10.921 Running I/O for 1 seconds... 
00:22:11.857 00:22:11.857 Latency(us) 00:22:11.857 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:11.857 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:22:11.857 nvme0n1 : 1.01 11065.06 43.22 0.00 0.00 11530.40 3842.79 17039.36 00:22:11.857 =================================================================================================================== 00:22:11.857 Total : 11065.06 43.22 0.00 0.00 11530.40 3842.79 17039.36 00:22:11.857 0 00:22:11.857 16:33:45 -- keyring/file.sh@61 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:22:11.857 16:33:45 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:22:12.423 16:33:46 -- keyring/file.sh@62 -- # get_refcnt key0 00:22:12.423 16:33:46 -- keyring/common.sh@12 -- # get_key key0 00:22:12.423 16:33:46 -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:12.423 16:33:46 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:12.423 16:33:46 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:12.423 16:33:46 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:12.681 16:33:46 -- keyring/file.sh@62 -- # (( 1 == 1 )) 00:22:12.681 16:33:46 -- keyring/file.sh@63 -- # get_refcnt key1 00:22:12.681 16:33:46 -- keyring/common.sh@12 -- # get_key key1 00:22:12.681 16:33:46 -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:12.681 16:33:46 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:12.681 16:33:46 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:12.681 16:33:46 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:12.939 16:33:46 -- keyring/file.sh@63 -- # (( 1 == 1 )) 00:22:12.939 16:33:46 -- keyring/file.sh@66 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:22:12.939 16:33:46 -- common/autotest_common.sh@638 -- # local es=0 00:22:12.939 16:33:46 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:22:12.939 16:33:46 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:22:12.939 16:33:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:12.939 16:33:46 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:22:12.939 16:33:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:12.939 16:33:46 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:22:12.939 16:33:46 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:22:13.197 [2024-04-17 16:33:47.166848] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:13.197 [2024-04-17 16:33:47.167452] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x194e670 (107): Transport 
endpoint is not connected 00:22:13.197 [2024-04-17 16:33:47.168429] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x194e670 (9): Bad file descriptor 00:22:13.197 [2024-04-17 16:33:47.169423] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:13.197 [2024-04-17 16:33:47.169477] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:22:13.197 [2024-04-17 16:33:47.169499] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:13.197 2024/04/17 16:33:47 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:22:13.197 request: 00:22:13.197 { 00:22:13.197 "method": "bdev_nvme_attach_controller", 00:22:13.197 "params": { 00:22:13.197 "name": "nvme0", 00:22:13.197 "trtype": "tcp", 00:22:13.197 "traddr": "127.0.0.1", 00:22:13.197 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:13.197 "adrfam": "ipv4", 00:22:13.197 "trsvcid": "4420", 00:22:13.197 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:13.197 "psk": "key1" 00:22:13.197 } 00:22:13.197 } 00:22:13.197 Got JSON-RPC error response 00:22:13.197 GoRPCClient: error on JSON-RPC call 00:22:13.197 16:33:47 -- common/autotest_common.sh@641 -- # es=1 00:22:13.197 16:33:47 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:13.197 16:33:47 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:13.197 16:33:47 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:13.197 16:33:47 -- keyring/file.sh@68 -- # get_refcnt key0 00:22:13.197 16:33:47 -- keyring/common.sh@12 -- # get_key key0 00:22:13.197 16:33:47 -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:13.197 16:33:47 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:13.197 16:33:47 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:13.197 16:33:47 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:13.761 16:33:47 -- keyring/file.sh@68 -- # (( 1 == 1 )) 00:22:13.761 16:33:47 -- keyring/file.sh@69 -- # get_refcnt key1 00:22:13.761 16:33:47 -- keyring/common.sh@12 -- # get_key key1 00:22:13.761 16:33:47 -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:13.761 16:33:47 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:13.761 16:33:47 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:13.761 16:33:47 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:14.019 16:33:47 -- keyring/file.sh@69 -- # (( 1 == 1 )) 00:22:14.019 16:33:47 -- keyring/file.sh@72 -- # bperf_cmd keyring_file_remove_key key0 00:22:14.019 16:33:47 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:22:14.277 16:33:48 -- keyring/file.sh@73 -- # bperf_cmd keyring_file_remove_key key1 00:22:14.277 16:33:48 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:22:14.534 16:33:48 -- keyring/file.sh@74 -- # bperf_cmd keyring_get_keys 00:22:14.534 16:33:48 -- keyring/file.sh@74 -- # jq length 00:22:14.534 16:33:48 -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:14.791 16:33:48 -- keyring/file.sh@74 -- # (( 0 == 0 )) 00:22:14.791 16:33:48 -- keyring/file.sh@77 -- # chmod 0660 /tmp/tmp.MtykEn6PNZ 00:22:14.791 16:33:48 -- keyring/file.sh@78 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.MtykEn6PNZ 00:22:14.791 16:33:48 -- common/autotest_common.sh@638 -- # local es=0 00:22:14.791 16:33:48 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.MtykEn6PNZ 00:22:14.791 16:33:48 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:22:14.791 16:33:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:14.791 16:33:48 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:22:14.791 16:33:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:14.791 16:33:48 -- common/autotest_common.sh@641 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.MtykEn6PNZ 00:22:14.791 16:33:48 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.MtykEn6PNZ 00:22:15.049 [2024-04-17 16:33:48.989920] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.MtykEn6PNZ': 0100660 00:22:15.049 [2024-04-17 16:33:48.989980] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:15.049 2024/04/17 16:33:48 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.MtykEn6PNZ], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:22:15.049 request: 00:22:15.049 { 00:22:15.049 "method": "keyring_file_add_key", 00:22:15.049 "params": { 00:22:15.049 "name": "key0", 00:22:15.049 "path": "/tmp/tmp.MtykEn6PNZ" 00:22:15.049 } 00:22:15.049 } 00:22:15.049 Got JSON-RPC error response 00:22:15.049 GoRPCClient: error on JSON-RPC call 00:22:15.049 16:33:49 -- common/autotest_common.sh@641 -- # es=1 00:22:15.049 16:33:49 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:15.049 16:33:49 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:15.049 16:33:49 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:15.049 16:33:49 -- keyring/file.sh@81 -- # chmod 0600 /tmp/tmp.MtykEn6PNZ 00:22:15.049 16:33:49 -- keyring/file.sh@82 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.MtykEn6PNZ 00:22:15.049 16:33:49 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.MtykEn6PNZ 00:22:15.306 16:33:49 -- keyring/file.sh@83 -- # rm -f /tmp/tmp.MtykEn6PNZ 00:22:15.306 16:33:49 -- keyring/file.sh@85 -- # get_refcnt key0 00:22:15.306 16:33:49 -- keyring/common.sh@12 -- # get_key key0 00:22:15.307 16:33:49 -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:15.307 16:33:49 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:15.307 16:33:49 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:15.307 16:33:49 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:15.872 16:33:49 -- keyring/file.sh@85 -- # (( 1 == 1 )) 00:22:15.872 16:33:49 -- keyring/file.sh@87 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:15.872 16:33:49 -- common/autotest_common.sh@638 -- # local es=0 00:22:15.872 16:33:49 -- 
common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:15.872 16:33:49 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:22:15.872 16:33:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:15.872 16:33:49 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:22:15.872 16:33:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:15.872 16:33:49 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:15.872 16:33:49 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:16.129 [2024-04-17 16:33:49.922164] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.MtykEn6PNZ': No such file or directory 00:22:16.129 [2024-04-17 16:33:49.922240] nvme_tcp.c:2570:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:22:16.129 [2024-04-17 16:33:49.922273] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:22:16.129 [2024-04-17 16:33:49.922286] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:16.129 [2024-04-17 16:33:49.922297] bdev_nvme.c:6183:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:22:16.129 2024/04/17 16:33:49 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:22:16.129 request: 00:22:16.129 { 00:22:16.129 "method": "bdev_nvme_attach_controller", 00:22:16.129 "params": { 00:22:16.129 "name": "nvme0", 00:22:16.129 "trtype": "tcp", 00:22:16.129 "traddr": "127.0.0.1", 00:22:16.129 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:16.129 "adrfam": "ipv4", 00:22:16.129 "trsvcid": "4420", 00:22:16.129 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:16.129 "psk": "key0" 00:22:16.129 } 00:22:16.129 } 00:22:16.129 Got JSON-RPC error response 00:22:16.129 GoRPCClient: error on JSON-RPC call 00:22:16.129 16:33:49 -- common/autotest_common.sh@641 -- # es=1 00:22:16.129 16:33:49 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:16.129 16:33:49 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:16.129 16:33:49 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:16.129 16:33:49 -- keyring/file.sh@89 -- # bperf_cmd keyring_file_remove_key key0 00:22:16.129 16:33:49 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:22:16.387 16:33:50 -- keyring/file.sh@92 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:22:16.387 16:33:50 -- keyring/common.sh@15 -- # local name key digest path 00:22:16.387 16:33:50 -- keyring/common.sh@17 -- # name=key0 00:22:16.387 16:33:50 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:22:16.387 16:33:50 -- keyring/common.sh@17 -- # digest=0 00:22:16.387 16:33:50 -- keyring/common.sh@18 -- # 
mktemp 00:22:16.387 16:33:50 -- keyring/common.sh@18 -- # path=/tmp/tmp.d3lWA0ZnIi 00:22:16.387 16:33:50 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:22:16.387 16:33:50 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:22:16.387 16:33:50 -- nvmf/common.sh@691 -- # local prefix key digest 00:22:16.387 16:33:50 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:22:16.387 16:33:50 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:22:16.387 16:33:50 -- nvmf/common.sh@693 -- # digest=0 00:22:16.387 16:33:50 -- nvmf/common.sh@694 -- # python - 00:22:16.387 16:33:50 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.d3lWA0ZnIi 00:22:16.387 16:33:50 -- keyring/common.sh@23 -- # echo /tmp/tmp.d3lWA0ZnIi 00:22:16.387 16:33:50 -- keyring/file.sh@92 -- # key0path=/tmp/tmp.d3lWA0ZnIi 00:22:16.387 16:33:50 -- keyring/file.sh@93 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.d3lWA0ZnIi 00:22:16.387 16:33:50 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.d3lWA0ZnIi 00:22:16.645 16:33:50 -- keyring/file.sh@94 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:16.645 16:33:50 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:17.210 nvme0n1 00:22:17.210 16:33:51 -- keyring/file.sh@96 -- # get_refcnt key0 00:22:17.210 16:33:51 -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:17.210 16:33:51 -- keyring/common.sh@12 -- # get_key key0 00:22:17.210 16:33:51 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:17.210 16:33:51 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:17.210 16:33:51 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:17.468 16:33:51 -- keyring/file.sh@96 -- # (( 2 == 2 )) 00:22:17.468 16:33:51 -- keyring/file.sh@97 -- # bperf_cmd keyring_file_remove_key key0 00:22:17.468 16:33:51 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:22:17.726 16:33:51 -- keyring/file.sh@98 -- # get_key key0 00:22:17.726 16:33:51 -- keyring/file.sh@98 -- # jq -r .removed 00:22:17.726 16:33:51 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:17.726 16:33:51 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:17.726 16:33:51 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:17.984 16:33:51 -- keyring/file.sh@98 -- # [[ true == \t\r\u\e ]] 00:22:17.984 16:33:51 -- keyring/file.sh@99 -- # get_refcnt key0 00:22:17.984 16:33:51 -- keyring/common.sh@12 -- # get_key key0 00:22:17.984 16:33:51 -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:17.984 16:33:51 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:17.984 16:33:51 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:17.984 16:33:51 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:18.243 16:33:52 -- keyring/file.sh@99 -- # (( 1 == 1 )) 00:22:18.243 16:33:52 -- keyring/file.sh@100 -- # bperf_cmd 
bdev_nvme_detach_controller nvme0 00:22:18.243 16:33:52 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:22:18.501 16:33:52 -- keyring/file.sh@101 -- # bperf_cmd keyring_get_keys 00:22:18.501 16:33:52 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:18.501 16:33:52 -- keyring/file.sh@101 -- # jq length 00:22:18.762 16:33:52 -- keyring/file.sh@101 -- # (( 0 == 0 )) 00:22:18.762 16:33:52 -- keyring/file.sh@104 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.d3lWA0ZnIi 00:22:18.762 16:33:52 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.d3lWA0ZnIi 00:22:19.032 16:33:52 -- keyring/file.sh@105 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Ys3wl2ldW2 00:22:19.032 16:33:52 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Ys3wl2ldW2 00:22:19.032 16:33:53 -- keyring/file.sh@106 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:19.032 16:33:53 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:19.597 nvme0n1 00:22:19.597 16:33:53 -- keyring/file.sh@109 -- # bperf_cmd save_config 00:22:19.597 16:33:53 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:22:19.856 16:33:53 -- keyring/file.sh@109 -- # config='{ 00:22:19.856 "subsystems": [ 00:22:19.856 { 00:22:19.856 "subsystem": "keyring", 00:22:19.856 "config": [ 00:22:19.856 { 00:22:19.856 "method": "keyring_file_add_key", 00:22:19.856 "params": { 00:22:19.856 "name": "key0", 00:22:19.856 "path": "/tmp/tmp.d3lWA0ZnIi" 00:22:19.856 } 00:22:19.856 }, 00:22:19.856 { 00:22:19.856 "method": "keyring_file_add_key", 00:22:19.856 "params": { 00:22:19.856 "name": "key1", 00:22:19.856 "path": "/tmp/tmp.Ys3wl2ldW2" 00:22:19.856 } 00:22:19.856 } 00:22:19.856 ] 00:22:19.856 }, 00:22:19.856 { 00:22:19.856 "subsystem": "iobuf", 00:22:19.856 "config": [ 00:22:19.856 { 00:22:19.856 "method": "iobuf_set_options", 00:22:19.856 "params": { 00:22:19.856 "large_bufsize": 135168, 00:22:19.856 "large_pool_count": 1024, 00:22:19.856 "small_bufsize": 8192, 00:22:19.856 "small_pool_count": 8192 00:22:19.856 } 00:22:19.856 } 00:22:19.856 ] 00:22:19.856 }, 00:22:19.856 { 00:22:19.856 "subsystem": "sock", 00:22:19.856 "config": [ 00:22:19.856 { 00:22:19.856 "method": "sock_impl_set_options", 00:22:19.856 "params": { 00:22:19.856 "enable_ktls": false, 00:22:19.856 "enable_placement_id": 0, 00:22:19.856 "enable_quickack": false, 00:22:19.856 "enable_recv_pipe": true, 00:22:19.856 "enable_zerocopy_send_client": false, 00:22:19.856 "enable_zerocopy_send_server": true, 00:22:19.856 "impl_name": "posix", 00:22:19.856 "recv_buf_size": 2097152, 00:22:19.856 "send_buf_size": 2097152, 00:22:19.856 "tls_version": 0, 00:22:19.856 "zerocopy_threshold": 0 00:22:19.856 } 00:22:19.856 }, 00:22:19.856 { 00:22:19.856 "method": "sock_impl_set_options", 00:22:19.856 "params": { 00:22:19.856 "enable_ktls": false, 00:22:19.856 "enable_placement_id": 0, 00:22:19.856 "enable_quickack": false, 00:22:19.856 "enable_recv_pipe": 
true, 00:22:19.856 "enable_zerocopy_send_client": false, 00:22:19.856 "enable_zerocopy_send_server": true, 00:22:19.856 "impl_name": "ssl", 00:22:19.856 "recv_buf_size": 4096, 00:22:19.856 "send_buf_size": 4096, 00:22:19.856 "tls_version": 0, 00:22:19.856 "zerocopy_threshold": 0 00:22:19.856 } 00:22:19.856 } 00:22:19.856 ] 00:22:19.856 }, 00:22:19.856 { 00:22:19.856 "subsystem": "vmd", 00:22:19.856 "config": [] 00:22:19.856 }, 00:22:19.856 { 00:22:19.856 "subsystem": "accel", 00:22:19.856 "config": [ 00:22:19.856 { 00:22:19.856 "method": "accel_set_options", 00:22:19.856 "params": { 00:22:19.856 "buf_count": 2048, 00:22:19.856 "large_cache_size": 16, 00:22:19.856 "sequence_count": 2048, 00:22:19.856 "small_cache_size": 128, 00:22:19.856 "task_count": 2048 00:22:19.856 } 00:22:19.856 } 00:22:19.856 ] 00:22:19.856 }, 00:22:19.856 { 00:22:19.856 "subsystem": "bdev", 00:22:19.856 "config": [ 00:22:19.856 { 00:22:19.856 "method": "bdev_set_options", 00:22:19.856 "params": { 00:22:19.856 "bdev_auto_examine": true, 00:22:19.856 "bdev_io_cache_size": 256, 00:22:19.856 "bdev_io_pool_size": 65535, 00:22:19.856 "iobuf_large_cache_size": 16, 00:22:19.856 "iobuf_small_cache_size": 128 00:22:19.856 } 00:22:19.856 }, 00:22:19.856 { 00:22:19.856 "method": "bdev_raid_set_options", 00:22:19.856 "params": { 00:22:19.856 "process_window_size_kb": 1024 00:22:19.856 } 00:22:19.856 }, 00:22:19.856 { 00:22:19.856 "method": "bdev_iscsi_set_options", 00:22:19.856 "params": { 00:22:19.856 "timeout_sec": 30 00:22:19.856 } 00:22:19.856 }, 00:22:19.856 { 00:22:19.856 "method": "bdev_nvme_set_options", 00:22:19.856 "params": { 00:22:19.856 "action_on_timeout": "none", 00:22:19.856 "allow_accel_sequence": false, 00:22:19.856 "arbitration_burst": 0, 00:22:19.856 "bdev_retry_count": 3, 00:22:19.856 "ctrlr_loss_timeout_sec": 0, 00:22:19.856 "delay_cmd_submit": true, 00:22:19.856 "dhchap_dhgroups": [ 00:22:19.856 "null", 00:22:19.856 "ffdhe2048", 00:22:19.856 "ffdhe3072", 00:22:19.856 "ffdhe4096", 00:22:19.856 "ffdhe6144", 00:22:19.856 "ffdhe8192" 00:22:19.856 ], 00:22:19.856 "dhchap_digests": [ 00:22:19.856 "sha256", 00:22:19.856 "sha384", 00:22:19.856 "sha512" 00:22:19.856 ], 00:22:19.856 "disable_auto_failback": false, 00:22:19.856 "fast_io_fail_timeout_sec": 0, 00:22:19.856 "generate_uuids": false, 00:22:19.856 "high_priority_weight": 0, 00:22:19.856 "io_path_stat": false, 00:22:19.856 "io_queue_requests": 512, 00:22:19.856 "keep_alive_timeout_ms": 10000, 00:22:19.856 "low_priority_weight": 0, 00:22:19.856 "medium_priority_weight": 0, 00:22:19.856 "nvme_adminq_poll_period_us": 10000, 00:22:19.856 "nvme_error_stat": false, 00:22:19.856 "nvme_ioq_poll_period_us": 0, 00:22:19.856 "rdma_cm_event_timeout_ms": 0, 00:22:19.856 "rdma_max_cq_size": 0, 00:22:19.856 "rdma_srq_size": 0, 00:22:19.856 "reconnect_delay_sec": 0, 00:22:19.856 "timeout_admin_us": 0, 00:22:19.856 "timeout_us": 0, 00:22:19.856 "transport_ack_timeout": 0, 00:22:19.856 "transport_retry_count": 4, 00:22:19.856 "transport_tos": 0 00:22:19.856 } 00:22:19.856 }, 00:22:19.856 { 00:22:19.856 "method": "bdev_nvme_attach_controller", 00:22:19.856 "params": { 00:22:19.856 "adrfam": "IPv4", 00:22:19.856 "ctrlr_loss_timeout_sec": 0, 00:22:19.856 "ddgst": false, 00:22:19.856 "fast_io_fail_timeout_sec": 0, 00:22:19.856 "hdgst": false, 00:22:19.856 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:19.856 "name": "nvme0", 00:22:19.856 "prchk_guard": false, 00:22:19.856 "prchk_reftag": false, 00:22:19.856 "psk": "key0", 00:22:19.856 "reconnect_delay_sec": 0, 00:22:19.856 
"subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:19.856 "traddr": "127.0.0.1", 00:22:19.856 "trsvcid": "4420", 00:22:19.856 "trtype": "TCP" 00:22:19.856 } 00:22:19.856 }, 00:22:19.856 { 00:22:19.856 "method": "bdev_nvme_set_hotplug", 00:22:19.856 "params": { 00:22:19.856 "enable": false, 00:22:19.856 "period_us": 100000 00:22:19.856 } 00:22:19.856 }, 00:22:19.856 { 00:22:19.856 "method": "bdev_wait_for_examine" 00:22:19.856 } 00:22:19.856 ] 00:22:19.856 }, 00:22:19.856 { 00:22:19.856 "subsystem": "nbd", 00:22:19.856 "config": [] 00:22:19.856 } 00:22:19.856 ] 00:22:19.856 }' 00:22:19.856 16:33:53 -- keyring/file.sh@111 -- # killprocess 92513 00:22:19.856 16:33:53 -- common/autotest_common.sh@936 -- # '[' -z 92513 ']' 00:22:19.856 16:33:53 -- common/autotest_common.sh@940 -- # kill -0 92513 00:22:19.856 16:33:53 -- common/autotest_common.sh@941 -- # uname 00:22:19.856 16:33:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:19.856 16:33:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92513 00:22:19.856 16:33:53 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:19.856 16:33:53 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:19.856 killing process with pid 92513 00:22:19.856 16:33:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92513' 00:22:19.856 16:33:53 -- common/autotest_common.sh@955 -- # kill 92513 00:22:19.856 Received shutdown signal, test time was about 1.000000 seconds 00:22:19.856 00:22:19.856 Latency(us) 00:22:19.856 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:19.856 =================================================================================================================== 00:22:19.856 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:19.856 16:33:53 -- common/autotest_common.sh@960 -- # wait 92513 00:22:20.115 16:33:53 -- keyring/file.sh@114 -- # bperfpid=93000 00:22:20.115 16:33:53 -- keyring/file.sh@112 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:22:20.115 16:33:53 -- keyring/file.sh@116 -- # waitforlisten 93000 /var/tmp/bperf.sock 00:22:20.115 16:33:53 -- common/autotest_common.sh@817 -- # '[' -z 93000 ']' 00:22:20.115 16:33:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:20.115 16:33:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:20.115 16:33:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:20.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:22:20.115 16:33:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:20.115 16:33:53 -- common/autotest_common.sh@10 -- # set +x 00:22:20.115 16:33:53 -- keyring/file.sh@112 -- # echo '{ 00:22:20.115 "subsystems": [ 00:22:20.115 { 00:22:20.115 "subsystem": "keyring", 00:22:20.115 "config": [ 00:22:20.115 { 00:22:20.115 "method": "keyring_file_add_key", 00:22:20.115 "params": { 00:22:20.115 "name": "key0", 00:22:20.115 "path": "/tmp/tmp.d3lWA0ZnIi" 00:22:20.115 } 00:22:20.115 }, 00:22:20.115 { 00:22:20.115 "method": "keyring_file_add_key", 00:22:20.115 "params": { 00:22:20.115 "name": "key1", 00:22:20.115 "path": "/tmp/tmp.Ys3wl2ldW2" 00:22:20.115 } 00:22:20.115 } 00:22:20.115 ] 00:22:20.115 }, 00:22:20.115 { 00:22:20.115 "subsystem": "iobuf", 00:22:20.115 "config": [ 00:22:20.115 { 00:22:20.115 "method": "iobuf_set_options", 00:22:20.115 "params": { 00:22:20.115 "large_bufsize": 135168, 00:22:20.115 "large_pool_count": 1024, 00:22:20.115 "small_bufsize": 8192, 00:22:20.115 "small_pool_count": 8192 00:22:20.115 } 00:22:20.115 } 00:22:20.115 ] 00:22:20.115 }, 00:22:20.115 { 00:22:20.115 "subsystem": "sock", 00:22:20.115 "config": [ 00:22:20.115 { 00:22:20.115 "method": "sock_impl_set_options", 00:22:20.115 "params": { 00:22:20.115 "enable_ktls": false, 00:22:20.115 "enable_placement_id": 0, 00:22:20.115 "enable_quickack": false, 00:22:20.115 "enable_recv_pipe": true, 00:22:20.115 "enable_zerocopy_send_client": false, 00:22:20.115 "enable_zerocopy_send_server": true, 00:22:20.115 "impl_name": "posix", 00:22:20.115 "recv_buf_size": 2097152, 00:22:20.115 "send_buf_size": 2097152, 00:22:20.115 "tls_version": 0, 00:22:20.115 "zerocopy_threshold": 0 00:22:20.115 } 00:22:20.115 }, 00:22:20.115 { 00:22:20.115 "method": "sock_impl_set_options", 00:22:20.115 "params": { 00:22:20.115 "enable_ktls": false, 00:22:20.115 "enable_placement_id": 0, 00:22:20.115 "enable_quickack": false, 00:22:20.115 "enable_recv_pipe": true, 00:22:20.115 "enable_zerocopy_send_client": false, 00:22:20.115 "enable_zerocopy_send_server": true, 00:22:20.115 "impl_name": "ssl", 00:22:20.115 "recv_buf_size": 4096, 00:22:20.115 "send_buf_size": 4096, 00:22:20.115 "tls_version": 0, 00:22:20.115 "zerocopy_threshold": 0 00:22:20.115 } 00:22:20.115 } 00:22:20.115 ] 00:22:20.115 }, 00:22:20.115 { 00:22:20.115 "subsystem": "vmd", 00:22:20.115 "config": [] 00:22:20.115 }, 00:22:20.115 { 00:22:20.115 "subsystem": "accel", 00:22:20.115 "config": [ 00:22:20.115 { 00:22:20.115 "method": "accel_set_options", 00:22:20.115 "params": { 00:22:20.115 "buf_count": 2048, 00:22:20.115 "large_cache_size": 16, 00:22:20.115 "sequence_count": 2048, 00:22:20.115 "small_cache_size": 128, 00:22:20.115 "task_count": 2048 00:22:20.115 } 00:22:20.115 } 00:22:20.115 ] 00:22:20.115 }, 00:22:20.115 { 00:22:20.115 "subsystem": "bdev", 00:22:20.115 "config": [ 00:22:20.115 { 00:22:20.115 "method": "bdev_set_options", 00:22:20.115 "params": { 00:22:20.115 "bdev_auto_examine": true, 00:22:20.115 "bdev_io_cache_size": 256, 00:22:20.115 "bdev_io_pool_size": 65535, 00:22:20.115 "iobuf_large_cache_size": 16, 00:22:20.115 "iobuf_small_cache_size": 128 00:22:20.115 } 00:22:20.115 }, 00:22:20.115 { 00:22:20.115 "method": "bdev_raid_set_options", 00:22:20.115 "params": { 00:22:20.115 "process_window_size_kb": 1024 00:22:20.115 } 00:22:20.115 }, 00:22:20.115 { 00:22:20.115 "method": "bdev_iscsi_set_options", 00:22:20.115 "params": { 00:22:20.115 "timeout_sec": 30 00:22:20.115 } 00:22:20.115 }, 00:22:20.115 { 00:22:20.115 "method": "bdev_nvme_set_options", 
00:22:20.115 "params": { 00:22:20.115 "action_on_timeout": "none", 00:22:20.115 "allow_accel_sequence": false, 00:22:20.115 "arbitration_burst": 0, 00:22:20.115 "bdev_retry_count": 3, 00:22:20.115 "ctrlr_loss_timeout_sec": 0, 00:22:20.115 "delay_cmd_submit": true, 00:22:20.115 "dhchap_dhgroups": [ 00:22:20.115 "null", 00:22:20.115 "ffdhe2048", 00:22:20.115 "ffdhe3072", 00:22:20.115 "ffdhe4096", 00:22:20.115 "ffdhe6144", 00:22:20.115 "ffdhe8192" 00:22:20.115 ], 00:22:20.115 "dhchap_digests": [ 00:22:20.115 "sha256", 00:22:20.115 "sha384", 00:22:20.115 "sha512" 00:22:20.115 ], 00:22:20.115 "disable_auto_failback": false, 00:22:20.115 "fast_io_fail_timeout_sec": 0, 00:22:20.115 "generate_uuids": false, 00:22:20.115 "high_priority_weight": 0, 00:22:20.115 "io_path_stat": false, 00:22:20.115 "io_queue_requests": 512, 00:22:20.115 "keep_alive_timeout_ms": 10000, 00:22:20.115 "low_priority_weight": 0, 00:22:20.115 "medium_priority_weight": 0, 00:22:20.115 "nvme_adminq_poll_period_us": 10000, 00:22:20.115 "nvme_error_stat": false, 00:22:20.115 "nvme_ioq_poll_period_us": 0, 00:22:20.115 "rdma_cm_event_timeout_ms": 0, 00:22:20.115 "rdma_max_cq_size": 0, 00:22:20.115 "rdma_srq_size": 0, 00:22:20.115 "reconnect_delay_sec": 0, 00:22:20.115 "timeout_admin_us": 0, 00:22:20.115 "timeout_us": 0, 00:22:20.115 "transport_ack_timeout": 0, 00:22:20.115 "transport_retry_count": 4, 00:22:20.115 "transport_tos": 0 00:22:20.115 } 00:22:20.115 }, 00:22:20.115 { 00:22:20.115 "method": "bdev_nvme_attach_controller", 00:22:20.115 "params": { 00:22:20.115 "adrfam": "IPv4", 00:22:20.116 "ctrlr_loss_timeout_sec": 0, 00:22:20.116 "ddgst": false, 00:22:20.116 "fast_io_fail_timeout_sec": 0, 00:22:20.116 "hdgst": false, 00:22:20.116 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:20.116 "name": "nvme0", 00:22:20.116 "prchk_guard": false, 00:22:20.116 "prchk_reftag": false, 00:22:20.116 "psk": "key0", 00:22:20.116 "reconnect_delay_sec": 0, 00:22:20.116 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:20.116 "traddr": "127.0.0.1", 00:22:20.116 "trsvcid": "4420", 00:22:20.116 "trtype": "TCP" 00:22:20.116 } 00:22:20.116 }, 00:22:20.116 { 00:22:20.116 "method": "bdev_nvme_set_hotplug", 00:22:20.116 "params": { 00:22:20.116 "enable": false, 00:22:20.116 "period_us": 100000 00:22:20.116 } 00:22:20.116 }, 00:22:20.116 { 00:22:20.116 "method": "bdev_wait_for_examine" 00:22:20.116 } 00:22:20.116 ] 00:22:20.116 }, 00:22:20.116 { 00:22:20.116 "subsystem": "nbd", 00:22:20.116 "config": [] 00:22:20.116 } 00:22:20.116 ] 00:22:20.116 }' 00:22:20.116 [2024-04-17 16:33:54.029683] Starting SPDK v24.05-pre git sha1 74bc86fe4 / DPDK 23.11.0 initialization... 
00:22:20.116 [2024-04-17 16:33:54.029806] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93000 ] 00:22:20.374 [2024-04-17 16:33:54.161017] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:20.374 [2024-04-17 16:33:54.267977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:20.633 [2024-04-17 16:33:54.444322] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:21.200 16:33:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:21.200 16:33:55 -- common/autotest_common.sh@850 -- # return 0 00:22:21.200 16:33:55 -- keyring/file.sh@117 -- # bperf_cmd keyring_get_keys 00:22:21.200 16:33:55 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:21.200 16:33:55 -- keyring/file.sh@117 -- # jq length 00:22:21.458 16:33:55 -- keyring/file.sh@117 -- # (( 2 == 2 )) 00:22:21.458 16:33:55 -- keyring/file.sh@118 -- # get_refcnt key0 00:22:21.458 16:33:55 -- keyring/common.sh@12 -- # get_key key0 00:22:21.458 16:33:55 -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:21.458 16:33:55 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:21.458 16:33:55 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:21.458 16:33:55 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:21.715 16:33:55 -- keyring/file.sh@118 -- # (( 2 == 2 )) 00:22:21.715 16:33:55 -- keyring/file.sh@119 -- # get_refcnt key1 00:22:21.715 16:33:55 -- keyring/common.sh@12 -- # get_key key1 00:22:21.715 16:33:55 -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:21.715 16:33:55 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:21.715 16:33:55 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:21.715 16:33:55 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:21.973 16:33:55 -- keyring/file.sh@119 -- # (( 1 == 1 )) 00:22:21.973 16:33:55 -- keyring/file.sh@120 -- # bperf_cmd bdev_nvme_get_controllers 00:22:21.973 16:33:55 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:22:21.973 16:33:55 -- keyring/file.sh@120 -- # jq -r '.[].name' 00:22:22.231 16:33:56 -- keyring/file.sh@120 -- # [[ nvme0 == nvme0 ]] 00:22:22.231 16:33:56 -- keyring/file.sh@1 -- # cleanup 00:22:22.231 16:33:56 -- keyring/file.sh@19 -- # rm -f /tmp/tmp.d3lWA0ZnIi /tmp/tmp.Ys3wl2ldW2 00:22:22.231 16:33:56 -- keyring/file.sh@20 -- # killprocess 93000 00:22:22.231 16:33:56 -- common/autotest_common.sh@936 -- # '[' -z 93000 ']' 00:22:22.231 16:33:56 -- common/autotest_common.sh@940 -- # kill -0 93000 00:22:22.231 16:33:56 -- common/autotest_common.sh@941 -- # uname 00:22:22.231 16:33:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:22.231 16:33:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93000 00:22:22.231 16:33:56 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:22.231 killing process with pid 93000 00:22:22.231 16:33:56 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:22.231 16:33:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93000' 00:22:22.231 Received shutdown signal, 
test time was about 1.000000 seconds 00:22:22.231 00:22:22.231 Latency(us) 00:22:22.231 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:22.231 =================================================================================================================== 00:22:22.231 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:22.231 16:33:56 -- common/autotest_common.sh@955 -- # kill 93000 00:22:22.231 16:33:56 -- common/autotest_common.sh@960 -- # wait 93000 00:22:22.489 16:33:56 -- keyring/file.sh@21 -- # killprocess 92481 00:22:22.489 16:33:56 -- common/autotest_common.sh@936 -- # '[' -z 92481 ']' 00:22:22.489 16:33:56 -- common/autotest_common.sh@940 -- # kill -0 92481 00:22:22.489 16:33:56 -- common/autotest_common.sh@941 -- # uname 00:22:22.489 16:33:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:22.489 16:33:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92481 00:22:22.489 16:33:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:22.489 16:33:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:22.489 killing process with pid 92481 00:22:22.489 16:33:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92481' 00:22:22.489 16:33:56 -- common/autotest_common.sh@955 -- # kill 92481 00:22:22.489 [2024-04-17 16:33:56.446166] app.c: 930:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:22.489 16:33:56 -- common/autotest_common.sh@960 -- # wait 92481 00:22:23.057 00:22:23.057 real 0m17.414s 00:22:23.057 user 0m43.664s 00:22:23.057 sys 0m3.420s 00:22:23.057 16:33:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:23.057 ************************************ 00:22:23.057 END TEST keyring_file 00:22:23.057 ************************************ 00:22:23.057 16:33:56 -- common/autotest_common.sh@10 -- # set +x 00:22:23.057 16:33:56 -- spdk/autotest.sh@293 -- # [[ n == y ]] 00:22:23.057 16:33:56 -- spdk/autotest.sh@305 -- # '[' 0 -eq 1 ']' 00:22:23.057 16:33:56 -- spdk/autotest.sh@309 -- # '[' 0 -eq 1 ']' 00:22:23.057 16:33:56 -- spdk/autotest.sh@313 -- # '[' 0 -eq 1 ']' 00:22:23.057 16:33:56 -- spdk/autotest.sh@318 -- # '[' 0 -eq 1 ']' 00:22:23.057 16:33:56 -- spdk/autotest.sh@327 -- # '[' 0 -eq 1 ']' 00:22:23.057 16:33:56 -- spdk/autotest.sh@332 -- # '[' 0 -eq 1 ']' 00:22:23.057 16:33:56 -- spdk/autotest.sh@336 -- # '[' 0 -eq 1 ']' 00:22:23.057 16:33:56 -- spdk/autotest.sh@340 -- # '[' 0 -eq 1 ']' 00:22:23.057 16:33:56 -- spdk/autotest.sh@344 -- # '[' 0 -eq 1 ']' 00:22:23.057 16:33:56 -- spdk/autotest.sh@349 -- # '[' 0 -eq 1 ']' 00:22:23.057 16:33:56 -- spdk/autotest.sh@353 -- # '[' 0 -eq 1 ']' 00:22:23.057 16:33:56 -- spdk/autotest.sh@360 -- # [[ 0 -eq 1 ]] 00:22:23.057 16:33:56 -- spdk/autotest.sh@364 -- # [[ 0 -eq 1 ]] 00:22:23.057 16:33:56 -- spdk/autotest.sh@368 -- # [[ 0 -eq 1 ]] 00:22:23.057 16:33:56 -- spdk/autotest.sh@372 -- # [[ 0 -eq 1 ]] 00:22:23.057 16:33:56 -- spdk/autotest.sh@377 -- # trap - SIGINT SIGTERM EXIT 00:22:23.057 16:33:56 -- spdk/autotest.sh@379 -- # timing_enter post_cleanup 00:22:23.057 16:33:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:23.057 16:33:56 -- common/autotest_common.sh@10 -- # set +x 00:22:23.057 16:33:56 -- spdk/autotest.sh@380 -- # autotest_cleanup 00:22:23.057 16:33:56 -- common/autotest_common.sh@1378 -- # local autotest_es=0 00:22:23.057 16:33:56 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:22:23.057 
16:33:56 -- common/autotest_common.sh@10 -- # set +x 00:22:24.963 INFO: APP EXITING 00:22:24.963 INFO: killing all VMs 00:22:24.963 INFO: killing vhost app 00:22:24.963 INFO: EXIT DONE 00:22:25.531 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:25.531 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:22:25.531 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:22:26.098 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:26.357 Cleaning 00:22:26.357 Removing: /var/run/dpdk/spdk0/config 00:22:26.357 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:22:26.357 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:22:26.357 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:22:26.357 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:22:26.357 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:22:26.357 Removing: /var/run/dpdk/spdk0/hugepage_info 00:22:26.357 Removing: /var/run/dpdk/spdk1/config 00:22:26.357 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:22:26.357 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:22:26.357 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:22:26.357 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:22:26.357 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:22:26.357 Removing: /var/run/dpdk/spdk1/hugepage_info 00:22:26.357 Removing: /var/run/dpdk/spdk2/config 00:22:26.357 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:22:26.357 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:22:26.357 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:22:26.357 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:22:26.357 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:22:26.357 Removing: /var/run/dpdk/spdk2/hugepage_info 00:22:26.357 Removing: /var/run/dpdk/spdk3/config 00:22:26.357 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:22:26.357 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:22:26.357 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:22:26.357 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:22:26.357 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:22:26.357 Removing: /var/run/dpdk/spdk3/hugepage_info 00:22:26.357 Removing: /var/run/dpdk/spdk4/config 00:22:26.357 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:22:26.357 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:22:26.357 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:22:26.357 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:22:26.357 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:22:26.357 Removing: /var/run/dpdk/spdk4/hugepage_info 00:22:26.357 Removing: /dev/shm/nvmf_trace.0 00:22:26.357 Removing: /dev/shm/spdk_tgt_trace.pid60364 00:22:26.357 Removing: /var/run/dpdk/spdk0 00:22:26.357 Removing: /var/run/dpdk/spdk1 00:22:26.357 Removing: /var/run/dpdk/spdk2 00:22:26.357 Removing: /var/run/dpdk/spdk3 00:22:26.357 Removing: /var/run/dpdk/spdk4 00:22:26.357 Removing: /var/run/dpdk/spdk_pid60189 00:22:26.357 Removing: /var/run/dpdk/spdk_pid60364 00:22:26.357 Removing: /var/run/dpdk/spdk_pid60699 00:22:26.357 Removing: /var/run/dpdk/spdk_pid60979 00:22:26.357 Removing: /var/run/dpdk/spdk_pid61160 00:22:26.357 Removing: /var/run/dpdk/spdk_pid61248 00:22:26.357 Removing: /var/run/dpdk/spdk_pid61345 00:22:26.357 Removing: /var/run/dpdk/spdk_pid61449 00:22:26.357 Removing: 
/var/run/dpdk/spdk_pid61486 00:22:26.357 Removing: /var/run/dpdk/spdk_pid61531 00:22:26.357 Removing: /var/run/dpdk/spdk_pid61598 00:22:26.357 Removing: /var/run/dpdk/spdk_pid61723 00:22:26.357 Removing: /var/run/dpdk/spdk_pid62371 00:22:26.357 Removing: /var/run/dpdk/spdk_pid62440 00:22:26.357 Removing: /var/run/dpdk/spdk_pid62513 00:22:26.357 Removing: /var/run/dpdk/spdk_pid62541 00:22:26.357 Removing: /var/run/dpdk/spdk_pid62624 00:22:26.357 Removing: /var/run/dpdk/spdk_pid62652 00:22:26.357 Removing: /var/run/dpdk/spdk_pid62735 00:22:26.357 Removing: /var/run/dpdk/spdk_pid62763 00:22:26.357 Removing: /var/run/dpdk/spdk_pid62824 00:22:26.357 Removing: /var/run/dpdk/spdk_pid62854 00:22:26.357 Removing: /var/run/dpdk/spdk_pid62904 00:22:26.357 Removing: /var/run/dpdk/spdk_pid62934 00:22:26.357 Removing: /var/run/dpdk/spdk_pid63095 00:22:26.357 Removing: /var/run/dpdk/spdk_pid63136 00:22:26.357 Removing: /var/run/dpdk/spdk_pid63216 00:22:26.357 Removing: /var/run/dpdk/spdk_pid63300 00:22:26.357 Removing: /var/run/dpdk/spdk_pid63329 00:22:26.357 Removing: /var/run/dpdk/spdk_pid63405 00:22:26.357 Removing: /var/run/dpdk/spdk_pid63449 00:22:26.358 Removing: /var/run/dpdk/spdk_pid63487 00:22:26.358 Removing: /var/run/dpdk/spdk_pid63527 00:22:26.358 Removing: /var/run/dpdk/spdk_pid63565 00:22:26.358 Removing: /var/run/dpdk/spdk_pid63605 00:22:26.358 Removing: /var/run/dpdk/spdk_pid63649 00:22:26.358 Removing: /var/run/dpdk/spdk_pid63687 00:22:26.358 Removing: /var/run/dpdk/spdk_pid63727 00:22:26.358 Removing: /var/run/dpdk/spdk_pid63771 00:22:26.358 Removing: /var/run/dpdk/spdk_pid63810 00:22:26.617 Removing: /var/run/dpdk/spdk_pid63848 00:22:26.617 Removing: /var/run/dpdk/spdk_pid63892 00:22:26.617 Removing: /var/run/dpdk/spdk_pid63927 00:22:26.617 Removing: /var/run/dpdk/spdk_pid63972 00:22:26.617 Removing: /var/run/dpdk/spdk_pid64012 00:22:26.617 Removing: /var/run/dpdk/spdk_pid64050 00:22:26.617 Removing: /var/run/dpdk/spdk_pid64097 00:22:26.617 Removing: /var/run/dpdk/spdk_pid64140 00:22:26.617 Removing: /var/run/dpdk/spdk_pid64178 00:22:26.617 Removing: /var/run/dpdk/spdk_pid64218 00:22:26.617 Removing: /var/run/dpdk/spdk_pid64293 00:22:26.617 Removing: /var/run/dpdk/spdk_pid64413 00:22:26.617 Removing: /var/run/dpdk/spdk_pid64848 00:22:26.617 Removing: /var/run/dpdk/spdk_pid68317 00:22:26.617 Removing: /var/run/dpdk/spdk_pid68675 00:22:26.617 Removing: /var/run/dpdk/spdk_pid69772 00:22:26.617 Removing: /var/run/dpdk/spdk_pid70156 00:22:26.617 Removing: /var/run/dpdk/spdk_pid70424 00:22:26.617 Removing: /var/run/dpdk/spdk_pid70474 00:22:26.617 Removing: /var/run/dpdk/spdk_pid71358 00:22:26.617 Removing: /var/run/dpdk/spdk_pid71404 00:22:26.617 Removing: /var/run/dpdk/spdk_pid71792 00:22:26.617 Removing: /var/run/dpdk/spdk_pid72333 00:22:26.617 Removing: /var/run/dpdk/spdk_pid72785 00:22:26.617 Removing: /var/run/dpdk/spdk_pid73758 00:22:26.617 Removing: /var/run/dpdk/spdk_pid74754 00:22:26.617 Removing: /var/run/dpdk/spdk_pid74876 00:22:26.617 Removing: /var/run/dpdk/spdk_pid74941 00:22:26.617 Removing: /var/run/dpdk/spdk_pid76440 00:22:26.617 Removing: /var/run/dpdk/spdk_pid76679 00:22:26.617 Removing: /var/run/dpdk/spdk_pid77131 00:22:26.617 Removing: /var/run/dpdk/spdk_pid77241 00:22:26.617 Removing: /var/run/dpdk/spdk_pid77392 00:22:26.617 Removing: /var/run/dpdk/spdk_pid77438 00:22:26.617 Removing: /var/run/dpdk/spdk_pid77488 00:22:26.617 Removing: /var/run/dpdk/spdk_pid77529 00:22:26.617 Removing: /var/run/dpdk/spdk_pid77693 00:22:26.617 Removing: /var/run/dpdk/spdk_pid77852 
00:22:26.617 Removing: /var/run/dpdk/spdk_pid78128 00:22:26.617 Removing: /var/run/dpdk/spdk_pid78246 00:22:26.617 Removing: /var/run/dpdk/spdk_pid78500 00:22:26.617 Removing: /var/run/dpdk/spdk_pid78631 00:22:26.617 Removing: /var/run/dpdk/spdk_pid78766 00:22:26.617 Removing: /var/run/dpdk/spdk_pid79109 00:22:26.617 Removing: /var/run/dpdk/spdk_pid79530 00:22:26.617 Removing: /var/run/dpdk/spdk_pid79849 00:22:26.617 Removing: /var/run/dpdk/spdk_pid80368 00:22:26.617 Removing: /var/run/dpdk/spdk_pid80370 00:22:26.617 Removing: /var/run/dpdk/spdk_pid80716 00:22:26.617 Removing: /var/run/dpdk/spdk_pid80730 00:22:26.617 Removing: /var/run/dpdk/spdk_pid80751 00:22:26.617 Removing: /var/run/dpdk/spdk_pid80780 00:22:26.617 Removing: /var/run/dpdk/spdk_pid80786 00:22:26.617 Removing: /var/run/dpdk/spdk_pid81095 00:22:26.617 Removing: /var/run/dpdk/spdk_pid81138 00:22:26.617 Removing: /var/run/dpdk/spdk_pid81476 00:22:26.617 Removing: /var/run/dpdk/spdk_pid81726 00:22:26.617 Removing: /var/run/dpdk/spdk_pid82234 00:22:26.617 Removing: /var/run/dpdk/spdk_pid82775 00:22:26.617 Removing: /var/run/dpdk/spdk_pid83366 00:22:26.617 Removing: /var/run/dpdk/spdk_pid83372 00:22:26.617 Removing: /var/run/dpdk/spdk_pid85374 00:22:26.617 Removing: /var/run/dpdk/spdk_pid85461 00:22:26.617 Removing: /var/run/dpdk/spdk_pid85552 00:22:26.617 Removing: /var/run/dpdk/spdk_pid85648 00:22:26.617 Removing: /var/run/dpdk/spdk_pid85818 00:22:26.617 Removing: /var/run/dpdk/spdk_pid85908 00:22:26.617 Removing: /var/run/dpdk/spdk_pid85994 00:22:26.617 Removing: /var/run/dpdk/spdk_pid86089 00:22:26.617 Removing: /var/run/dpdk/spdk_pid86442 00:22:26.617 Removing: /var/run/dpdk/spdk_pid87140 00:22:26.618 Removing: /var/run/dpdk/spdk_pid88514 00:22:26.618 Removing: /var/run/dpdk/spdk_pid88724 00:22:26.618 Removing: /var/run/dpdk/spdk_pid89010 00:22:26.618 Removing: /var/run/dpdk/spdk_pid89315 00:22:26.618 Removing: /var/run/dpdk/spdk_pid89878 00:22:26.618 Removing: /var/run/dpdk/spdk_pid89885 00:22:26.618 Removing: /var/run/dpdk/spdk_pid90256 00:22:26.618 Removing: /var/run/dpdk/spdk_pid90419 00:22:26.618 Removing: /var/run/dpdk/spdk_pid90580 00:22:26.618 Removing: /var/run/dpdk/spdk_pid90677 00:22:26.618 Removing: /var/run/dpdk/spdk_pid90832 00:22:26.618 Removing: /var/run/dpdk/spdk_pid90950 00:22:26.618 Removing: /var/run/dpdk/spdk_pid91649 00:22:26.618 Removing: /var/run/dpdk/spdk_pid91684 00:22:26.877 Removing: /var/run/dpdk/spdk_pid91719 00:22:26.877 Removing: /var/run/dpdk/spdk_pid91972 00:22:26.877 Removing: /var/run/dpdk/spdk_pid92004 00:22:26.877 Removing: /var/run/dpdk/spdk_pid92042 00:22:26.877 Removing: /var/run/dpdk/spdk_pid92481 00:22:26.877 Removing: /var/run/dpdk/spdk_pid92513 00:22:26.877 Removing: /var/run/dpdk/spdk_pid93000 00:22:26.877 Clean 00:22:26.877 16:34:00 -- common/autotest_common.sh@1437 -- # return 0 00:22:26.877 16:34:00 -- spdk/autotest.sh@381 -- # timing_exit post_cleanup 00:22:26.877 16:34:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:26.877 16:34:00 -- common/autotest_common.sh@10 -- # set +x 00:22:26.877 16:34:00 -- spdk/autotest.sh@383 -- # timing_exit autotest 00:22:26.877 16:34:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:26.877 16:34:00 -- common/autotest_common.sh@10 -- # set +x 00:22:26.877 16:34:00 -- spdk/autotest.sh@384 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:26.877 16:34:00 -- spdk/autotest.sh@386 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:22:26.877 16:34:00 -- spdk/autotest.sh@386 -- # rm -f 
/home/vagrant/spdk_repo/spdk/../output/udev.log 00:22:26.877 16:34:00 -- spdk/autotest.sh@388 -- # hash lcov 00:22:26.877 16:34:00 -- spdk/autotest.sh@388 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:22:26.877 16:34:00 -- spdk/autotest.sh@390 -- # hostname 00:22:26.877 16:34:00 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1705279005-2131 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:22:27.135 geninfo: WARNING: invalid characters removed from testname! 00:22:53.673 16:34:27 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:57.857 16:34:31 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:59.758 16:34:33 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:03.045 16:34:36 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:04.950 16:34:38 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:07.481 16:34:41 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:10.012 16:34:43 -- spdk/autotest.sh@397 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:23:10.012 16:34:44 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:10.012 16:34:44 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]] 00:23:10.012 16:34:44 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:10.012 16:34:44 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:10.012 16:34:44 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.012 16:34:44 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.012 16:34:44 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.012 16:34:44 -- paths/export.sh@5 -- $ export PATH 00:23:10.012 16:34:44 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.012 16:34:44 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:23:10.012 16:34:44 -- common/autobuild_common.sh@435 -- $ date +%s 00:23:10.271 16:34:44 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713371684.XXXXXX 00:23:10.271 16:34:44 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713371684.fF1mXq 00:23:10.271 16:34:44 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:23:10.271 16:34:44 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:23:10.271 16:34:44 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:23:10.271 16:34:44 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:23:10.271 16:34:44 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:23:10.271 16:34:44 -- common/autobuild_common.sh@451 -- $ get_config_params 00:23:10.271 16:34:44 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:23:10.271 16:34:44 -- common/autotest_common.sh@10 -- $ set +x 00:23:10.271 16:34:44 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 00:23:10.271 16:34:44 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:23:10.271 16:34:44 -- pm/common@17 -- $ local monitor 00:23:10.271 16:34:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:10.271 16:34:44 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=94688 
00:23:10.271 16:34:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:10.271 16:34:44 -- pm/common@21 -- $ date +%s 00:23:10.271 16:34:44 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=94690 00:23:10.271 16:34:44 -- pm/common@26 -- $ sleep 1 00:23:10.271 16:34:44 -- pm/common@21 -- $ date +%s 00:23:10.271 16:34:44 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1713371684 00:23:10.271 16:34:44 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1713371684 00:23:10.271 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1713371684_collect-vmstat.pm.log 00:23:10.271 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1713371684_collect-cpu-load.pm.log 00:23:11.206 16:34:45 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:23:11.206 16:34:45 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:23:11.206 16:34:45 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:23:11.206 16:34:45 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:23:11.206 16:34:45 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:23:11.206 16:34:45 -- spdk/autopackage.sh@19 -- $ timing_finish 00:23:11.206 16:34:45 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:23:11.207 16:34:45 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:23:11.207 16:34:45 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:23:11.207 16:34:45 -- spdk/autopackage.sh@20 -- $ exit 0 00:23:11.207 16:34:45 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:23:11.207 16:34:45 -- pm/common@30 -- $ signal_monitor_resources TERM 00:23:11.207 16:34:45 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:23:11.207 16:34:45 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:11.207 16:34:45 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:23:11.207 16:34:45 -- pm/common@45 -- $ pid=94695 00:23:11.207 16:34:45 -- pm/common@52 -- $ sudo kill -TERM 94695 00:23:11.207 16:34:45 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:11.207 16:34:45 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:23:11.207 16:34:45 -- pm/common@45 -- $ pid=94696 00:23:11.207 16:34:45 -- pm/common@52 -- $ sudo kill -TERM 94696 00:23:11.207 + [[ -n 5277 ]] 00:23:11.207 + sudo kill 5277 00:23:11.217 [Pipeline] } 00:23:11.235 [Pipeline] // timeout 00:23:11.242 [Pipeline] } 00:23:11.258 [Pipeline] // stage 00:23:11.262 [Pipeline] } 00:23:11.278 [Pipeline] // catchError 00:23:11.286 [Pipeline] stage 00:23:11.287 [Pipeline] { (Stop VM) 00:23:11.302 [Pipeline] sh 00:23:11.580 + vagrant halt 00:23:14.884 ==> default: Halting domain... 00:23:21.493 [Pipeline] sh 00:23:21.775 + vagrant destroy -f 00:23:25.082 ==> default: Removing domain... 
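The Stop VM stage above is standard Vagrant lifecycle handling driven from the workspace directory; roughly, as a sketch with only the path taken from the log:

#!/usr/bin/env bash
cd /var/jenkins/workspace/nvmf-tcp-vg-autotest_2
vagrant halt         # graceful shutdown; libvirt reports "Halting domain..."
vagrant destroy -f   # then delete the domain, as logged above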
00:23:25.353 [Pipeline] sh 00:23:25.632 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/output 00:23:25.642 [Pipeline] } 00:23:25.660 [Pipeline] // stage 00:23:25.667 [Pipeline] } 00:23:25.684 [Pipeline] // dir 00:23:25.690 [Pipeline] } 00:23:25.709 [Pipeline] // wrap 00:23:25.715 [Pipeline] } 00:23:25.730 [Pipeline] // catchError 00:23:25.739 [Pipeline] stage 00:23:25.741 [Pipeline] { (Epilogue) 00:23:25.757 [Pipeline] sh 00:23:26.037 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:23:32.652 [Pipeline] catchError 00:23:32.654 [Pipeline] { 00:23:32.669 [Pipeline] sh 00:23:32.973 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:23:32.973 Artifacts sizes are good 00:23:32.983 [Pipeline] } 00:23:32.997 [Pipeline] // catchError 00:23:33.007 [Pipeline] archiveArtifacts 00:23:33.012 Archiving artifacts 00:23:33.153 [Pipeline] cleanWs 00:23:33.165 [WS-CLEANUP] Deleting project workspace... 00:23:33.165 [WS-CLEANUP] Deferred wipeout is used... 00:23:33.171 [WS-CLEANUP] done 00:23:33.173 [Pipeline] } 00:23:33.192 [Pipeline] // stage 00:23:33.198 [Pipeline] } 00:23:33.216 [Pipeline] // node 00:23:33.222 [Pipeline] End of Pipeline 00:23:33.265 Finished: SUCCESS